| column | type |
|---|---|
| query_id | string (length 32) |
| query | string (6 to 3.9k characters) |
| positive_passages | list (1 to 21 items) |
| negative_passages | list (10 to 100 items) |
| subset | string (7 classes) |
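The records below follow this schema: each row pairs one query with a list of judged positive passages, a larger list of negative passages, and a subset label. As a rough, non-authoritative sketch of how such rows might be read, assuming they have been exported as one JSON object per line (the file name `reranking_rows.jsonl` is a hypothetical placeholder, not a documented loader for this dataset):

```python
# Minimal sketch: iterate rows that follow the schema above.
# Assumption: rows are stored as JSON Lines in "reranking_rows.jsonl" (hypothetical path).
import json

def load_rows(path):
    """Yield one dict per record with the five columns from the schema table."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            yield json.loads(line)

if __name__ == "__main__":
    for row in load_rows("reranking_rows.jsonl"):
        # positive_passages / negative_passages are lists of {"docid", "text", "title"} dicts
        print(row["query_id"], row["subset"],
              len(row["positive_passages"]), "positives,",
              len(row["negative_passages"]), "negatives")
```

Each yielded dict can be handed to downstream code as-is; nothing in the sketch depends on the subset value.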
query_id: 6735a6e2951888f65632543726b07d1e
query: Local Fisher discriminant analysis for supervised dimensionality reduction
positive_passages:
[
{
"docid": "7655df3f32e6cf7a5545ae2231f71e7c",
"text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.",
"title": ""
},
{
"docid": "da168a94f6642ee92454f2ea5380c7f3",
"text": "One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.",
"title": ""
}
]
negative_passages:
[
{
"docid": "0db63eb4f0b54767f97138312d5da8cd",
"text": "In real-world applications such as emotion recognition from recorded brain activity, data are captured from electrodes over time. These signals constitute a multidimensional time series. In this paper, Echo State Network (ESN), a recurrent neural network with a great success in time series prediction and classification, is optimized with different neural plasticity rules for classification of emotions based on electroencephalogram (EEG) time series. Actually, the neural plasticity rules are a kind of unsupervised learning adapted for the reservoir, i.e. the hidden layer of ESN. More specifically, an investigation of Oja’s rule, BCM rule and gaussian intrinsic plasticity rule was carried out in the context of EEG-based emotion recognition. The study, also, includes a comparison of the offline and online training of the ESN. When testing on the well-known affective benchmark ”DEAP dataset” which contains EEG signals from 32 subjects, we find that pretraining ESN with gaussian intrinsic plasticity enhanced the classification accuracy and outperformed the results achieved with an ESN pretrained with synaptic plasticity. Four classification problems were conducted in which the system complexity is increased and the discrimination is more challenging, i.e. inter-subject emotion discrimination. Our proposed method achieves higher performance over the state of the art methods.",
"title": ""
},
{
"docid": "dee24c18a7d653f3d4136031bcb6efcb",
"text": "In mobile cloud computing, application offloading is implemented as a software level solution for augmenting computing potentials of smart mobile devices. VM is one of the prominent approaches for offloading computational load to cloud server nodes. A challenging aspect of such frameworks is the additional computing resources utilization in the deployment and management of VM on Smartphone. The deployment of Virtual Machine (VM) requires computing resources for VM creation and configuration. The management of VM includes computing resources utilization in the monitoring of VM in entire lifecycle and physical resources management for VM on Smartphone. The objective of this work is to ensure that VM deployment and management requires additional computing resources on mobile device for application offloading. This paper analyzes the impact of VM deployment and management on the execution time of application in different experiments. We investigate VM deployment and management for application processing in simulation environment by using CloudSim, which is a simulation toolkit that provides an extensible simulation framework to model the simulation of VM deployment and management for application processing in cloud-computing infrastructure. VM deployment and management in application processing is evaluated by analyzing VM deployment, the execution time of applications and total execution time of the simulation. The analysis concludes that VM deployment and management require additional resources on the computing host. Therefore, VM deployment is a heavyweight approach for process offloading on smart mobile devices.",
"title": ""
},
{
"docid": "0994065c757a88373a4d97e5facfee85",
"text": "Scholarly literature suggests digital marketing skills gaps in industry, but these skills gaps are not clearly identified. The research aims to specify any digital marketing skills gaps encountered by professionals working in communication industries. In-depth interviews were undertaken with 20 communication industry professionals. A focus group followed, testing the rigour of the data. We find that a lack of specific technical skills; a need for best practice guidance on evaluation metrics, and a lack of intelligent futureproofing for dynamic technological change and development are skills gaps currently challenging the communication industry. However, the challenge of integrating digital marketing approaches with established marketing practice emerges as the key skills gap. Emerging from the key findings, a Digital Marketer Model was developed, highlighting the key competencies and skills needed by an excellent digital marketer. The research concludes that guidance on best practice, focusing upon evaluation metrics, futureproofing and strategic integration, needs to be developed for the communication industry. The Digital Marketing Model should be subject to further testing in industry and academia. Suggestions for further research are discussed.",
"title": ""
},
{
"docid": "046f15ecf1037477b10bfb4fa315c9c9",
"text": "With the rapid proliferation of camera-equipped smart devices (e.g., smartphones, pads, tablets), visible light communication (VLC) over screen-camera links emerges as a novel form of near-field communication. Such communication via smart devices is highly competitive for its user-friendliness, security, and infrastructure-less (i.e., no dependency on WiFi or cellular infrastructure). However, existing approaches mostly focus on improving the transmission speed and ignore the transmission reliability. Considering the interplay between the transmission speed and reliability towards effective end-to-end communication, in this paper, we aim to boost the throughput over screen-camera links by enhancing the transmission reliability. To this end, we propose RDCode, a robust dynamic barcode which enables a novel packet-frame-block structure. Based on the layered structure, we design different error correction schemes at three levels: intra-blocks, inter-blocks and inter-frames, in order to verify and recover the lost blocks and frames. Finally, we implement RDCode and experimentally show that RDCode reaches a high level of transmission reliability (e.g., reducing the error rate to 10%) and yields a at least doubled transmission rate, compared with the existing state-of-the-art approach COBRA.",
"title": ""
},
{
"docid": "7c81ddf6b7e6853ac1d964f1c0accd40",
"text": "DSM-5 distinguishes between paraphilias and paraphilic disorders. Paraphilias are defined as atypical, yet not necessarily disordered, sexual practices. Paraphilic disorders are instead diseases, which include distress, impairment in functioning, or entail risk of harm one's self or others. Hence, DSM-5 new approach to paraphilias demedicalizes and destigmatizes unusual sexual behaviors, provided they are not distressing or detrimental to self or others. Asphyxiophilia, a dangerous and potentially deadly form of sexual masochism involving sexual arousal by oxygen deprivation, are clearly described as disorders. Although autoerotic asphyxia has been associated with estimated mortality rates ranging from 250 to 1000 deaths per year in the United States, in Italy, knowledge on this condition is very poor. Episodes of death caused by autoerotic asphyxia seem to be underestimated because it often can be confounded with suicide cases, particularly in the Italian context where family members of the victim often try to disguise autoerotic behaviors of the victims. The current paper provides a review on sexual masochism disorder with asphyxiophilia and discusses one specific case as an example to examine those conditions that may or may not influence the likelihood that death from autoerotic asphyxia be erroneously reported as suicide or accidental injury.",
"title": ""
},
{
"docid": "323d633995296611c903874aefa5cdb7",
"text": "This paper investigates the possibility of communicating through vibrations. By modulating the vibration motors available in all mobile phones, and decoding them through accelerometers, we aim to communicate small packets of information. Of course, this will not match the bit rates available through RF modalities, such as NFC or Bluetooth, which utilize a much larger bandwidth. However, where security is vital, vibratory communication may offer advantages. We develop Ripple, a system that achieves up to 200 bits/s of secure transmission using off-the-shelf vibration motor chips, and 80 bits/s on Android smartphones. This is an outcome of designing and integrating a range of techniques, including multicarrier modulation, orthogonal vibration division, vibration braking, side-channel jamming, etc. Not all these techniques are novel; some are borrowed and suitably modified for our purposes, while others are unique to this relatively new platform of vibratory communication.",
"title": ""
},
{
"docid": "ae3b4397ebc759bbf20850f949bc7376",
"text": "Circulating tumor cell clusters (CTC clusters) are present in the blood of patients with cancer but their contribution to metastasis is not well defined. Using mouse models with tagged mammary tumors, we demonstrate that CTC clusters arise from oligoclonal tumor cell groupings and not from intravascular aggregation events. Although rare in the circulation compared with single CTCs, CTC clusters have 23- to 50-fold increased metastatic potential. In patients with breast cancer, single-cell resolution RNA sequencing of CTC clusters and single CTCs, matched within individual blood samples, identifies the cell junction component plakoglobin as highly differentially expressed. In mouse models, knockdown of plakoglobin abrogates CTC cluster formation and suppresses lung metastases. In breast cancer patients, both abundance of CTC clusters and high tumor plakoglobin levels denote adverse outcomes. Thus, CTC clusters are derived from multicellular groupings of primary tumor cells held together through plakoglobin-dependent intercellular adhesion, and though rare, they greatly contribute to the metastatic spread of cancer.",
"title": ""
},
{
"docid": "379df071aceaee1be2228070f0245257",
"text": "This paper reports a SiC-based solid-state circuit breaker (SSCB) with an adjustable current-time (I-t) tripping profile for both ultrafast short circuit protection and overload protection. The tripping time ranges from 0.5 microsecond to 10 seconds for a fault current ranging from 0.8X to 10X of the nominal current. The I-t tripping profile, adjustable by choosing different resistance values in the analog control circuit, can help avoid nuisance tripping of the SSCB due to inrush transient current. The maximum thermal capability of the 1200V SiC JFET static switch in the SSCB is investigated to set a practical thermal limit for the I-t tripping profile. Furthermore, a low fault current ‘blind zone’ limitation of the prior SSCB design is discussed and a new circuit solution is proposed to operate the SSCB even under a low fault current condition. Both simulation and experimental results are reported.",
"title": ""
},
{
"docid": "64723e2bb073d0ba4412a9affef16107",
"text": "The debate on the entrepreneurial university has raised questions about what motivates academics to engage with industry. This paper provides evidence, based on survey data for a comprehensive sample of UK investigators in the physical and engineering sciences. Our results suggest that most academics engage with industry to further their research rather than to commercialize their knowledge. However, there are differences in terms of the channels of engagement. While patenting and spin-off company formation is motivated exclusively by commercialization, joint research, contract research and consulting are strongly informed by research-related motives. We conclude that policy should refrain from focusing on monetary incentives for industry engagement and consider a broader range of incentives for promoting interaction between academia and industry.",
"title": ""
},
{
"docid": "0bcb2fdf59b88fca5760bfe456d74116",
"text": "A good distance metric is crucial for unsupervised learning from high-dimensional data. To learn a metric without any constraint or class label information, most unsupervised metric learning algorithms appeal to projecting observed data onto a low-dimensional manifold, where geometric relationships such as local or global pairwise distances are preserved. However, the projection may not necessarily improve the separability of the data, which is the desirable outcome of clustering. In this paper, we propose a novel unsupervised adaptive metric learning algorithm, called AML, which performs clustering and distance metric learning simultaneously. AML projects the data onto a low-dimensional manifold, where the separability of the data is maximized. We show that the joint clustering and distance metric learning can be formulated as a trace maximization problem, which can be solved via an iterative procedure in the EM framework. Experimental results on a collection of benchmark data sets demonstrated the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "3a1705ac3a95ec08280995d15ce8d705",
"text": "Although hybrid-electric vehicles have been studied mainly with the aim of increasing fuel economy, little has been done in order to improve both fuel economy and performance. However, vehicular-dynamic-performance characteristics such as acceleration and climbing ability are of prime importance in military vehicles such as the high-mobility multipurpose wheeled vehicle (HMMWV). This paper concentrates on the models that describe hybridized HMMWV vehicles and the simulation results of those models. Parallel and series configurations have been modeled using the advanced-vehicle-simulator software developed by the National Renewable Energy Laboratory. Both a retrofit approach and a constant-power approach have been tested, and the results are compared to the conventional model results. In addition, the effects of using smaller engines than the existing ones in hybrid HMMWV drive trains have been studied, and the results are compared to the data collected from an actual implementation of such a vehicle. Moreover, the integrated-starter/alternator (ISA) configuration has been considered, and the results were encouraging",
"title": ""
},
{
"docid": "76ecd4ba20333333af4d09b894ff29fc",
"text": "This study is an application of social identity theory to feminist consciousness and activism. For women, strong gender identifications may enhance support for equality struggles, whereas for men, they may contribute to backlashes against feminism. University students (N � 276), primarily Euroamerican, completed a measure of gender self-esteem (GSE, that part of one’s selfconcept derived from one’s gender), and two measures of feminism. High GSE in women and low GSE in men were related to support for feminism. Consistent with past research, women were more supportive of feminism than men, and in both genders, support for feminist ideas was greater than self-identification as a feminist.",
"title": ""
},
{
"docid": "75df2d0ef3d4a582e68338f1f515aa07",
"text": "While extensive studies on relation extraction have been conducted in the last decade, statistical systems based on supervised learning are still limited because they require large amounts of training data to achieve high performance. In this paper, we develop a cross-lingual annotation projection method that leverages parallel corpora to bootstrap a relation detector without significant annotation efforts for a resource-poor language. In order to make our method more reliable, we introduce three simple projection noise reduction methods. The merit of our method is demonstrated through a novel Korean relation detection task.",
"title": ""
},
{
"docid": "29b4a9f3b3da3172e319d11b8f938a7b",
"text": "Since social media have become very popular during the past few years, researchers have been focusing on being able to automatically process and extract sentiments information from large volume of social media data. This paper contributes to the topic, by focusing on sentiment analysis for Chinese social media. In this paper, we propose to rely on Part of Speech (POS) tags in order to extract unigrams and bigrams features. Bigrams are generated according to the grammatical relation between consecutive words. With those features, we have shown that focusing on a specific topic allows to reach higher estimation accuracy.",
"title": ""
},
{
"docid": "3157970218dc3761576345c0e01e3121",
"text": "This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select and execute among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu",
"title": ""
},
{
"docid": "ee141b7fd5c372fb65d355fe75ad47af",
"text": "As 100-Gb/s coherent systems based on polarization- division multiplexed quadrature phase shift keying (PDM-QPSK), with aggregate wavelength-division multiplexed (WDM) capacities close to 10 Tb/s, are getting widely deployed, the use of high-spectral-efficiency quadrature amplitude modulation (QAM) to increase both per-channel interface rates and aggregate WDM capacities is the next evolutionary step. In this paper we review high-spectral-efficiency optical modulation formats for use in digital coherent systems. We look at fundamental as well as at technological scaling trends and highlight important trade-offs pertaining to the design and performance of coherent higher-order QAM transponders.",
"title": ""
},
{
"docid": "1f770561b6f535e36dfb5e43326780a5",
"text": "The Red Brick WarehouseTMis a commercial Relational Database Management System designed specifically for query, decision support, and data warehouse applications. Red Brick Warehouse is a software-only system providing ANSI SQL support in an open cliendserver environment. Red Brick Warehouse is distinguished from traditional RDBMS products by an architecture optimized to deliver high performance in read-mostly, high-intensity query applications. In these applications, the workload is heavily biased toward complex SQL SELECT operations that read but do not update the database. The average unit of work is very large, and typically involves multi-table joins, aggregation, duplicate elimination, and sorting. Multi-user concurrency is moderate, with typical systems supporting 50 to 500 concurrent user sessions. Query databases are often very large, with tables ranging from 100 million to many billion rows and occupying 50 Gigabytes to 2 Terabytes, Databases are populated by massive bulk-load operations on an hourly, daily, or weekly cycle. Time-series and historical data are maintained for months or years. Red Brick Warehouse makes use of parallel processing as well as other specialized algorithms to achieve outstanding performance and scalability on cost-effective hardware platforms.",
"title": ""
},
{
"docid": "cebfc5224413c5acb7831cbf29ae5a8e",
"text": "Radio Frequency (RF) Energy Harvesting holds a pro mising future for generating a small amount of electrical power to drive partial circuits in wirelessly communicating electronics devices. Reducing power consumption has become a major challenge in wireless sensor networks. As a vital factor affecting system cost and lifetime, energy consumption in wireless sensor networks is an emerging and active res arch area. This chapter presents a practical approach for RF Energy harvesting and man agement of the harvested and available energy for wireless sensor networks using the Impro ved Energy Efficient Ant Based Routing Algorithm (IEEABR) as our proposed algorithm. The c hapter looks at measurement of the RF power density, calculation of the received power, s torage of the harvested power, and management of the power in wireless sensor networks . The routing uses IEEABR technique for energy management. Practical and real-time implemen tatio s of the RF Energy using PowercastTM harvesters and simulations using the ene rgy model of our Libelium Waspmote to verify the approach were performed. The chapter con cludes with performance analysis of the harvested energy, comparison of IEEABR and other tr aditional energy management techniques, while also looking at open research areas of energy harvesting and management for wireless sensor networks.",
"title": ""
},
{
"docid": "6a115059c4730fcdfa5dfe0db82243c5",
"text": "Measuring the similarity between rhythms is a fundamental problem in computational music theory, with many applications such as music information retrieval and copyright infringement resolution. A common way to represent a rhythm is as a binary sequence where a zero denotes a rest (silence) and a one represents a beat or note onset. This paper compares various measures of rhythm similarity including the Hamming distance, the Euclidean interval-vector distance, the interval-difference distance measure of Coyle and Shmulevich, the swap distance, and the chronotonic distance measures of Gustafson and Hofmann-Engl. Traditionally, rhythmic similarity measures are compared according to how well rhythms may be recognized with them, how efficiently they can be retrieved from a data base, or how well they model human perception and cognition of rhythms. In contrast, here similarity measures are compared on the basis of how much insight they provide about the structural inter-relationships that exist within families of rhythms, when phylogenetic trees and graphs are computed from the distance matrices determined by these similarity measures. For two collections of rhythms, namely the 4/4 time and 12/8 time clave-bell time lines used in traditional African and Afro-American music, the chronotonic and swap distances appear to be superior to the other measures, and each has its own atractive features. The similarity measures are also compared according to their computational complexity.",
"title": ""
},
{
"docid": "b4529985e1fa4e156900c9825fc1c6f9",
"text": "This paper presents the SWaT testbed, a modern industrial control system (ICS) for security research and training. SWaT is currently in use to (a) understand the impact of cyber and physical attacks on a water treatment system, (b) assess the effectiveness of attack detection algorithms, (c) assess the effectiveness of defense mechanisms when the system is under attack, and (d) understand the cascading effects of failures in one ICS on another dependent ICS. SWaT consists of a 6-stage water treatment process, each stage is autonomously controlled by a local PLC. The local fieldbus communications between sensors, actuators, and PLCs is realized through alternative wired and wireless channels. While the experience with the testbed indicates its value in conducting research in an active and realistic environment, it also points to design limitations that make it difficult for system identification and attack detection in some experiments.",
"title": ""
}
]
subset: scidocsrr

query_id: f28b308f7ff68b93c283f0e0a3133812
query: A Soft-Robotic Gripper With Enhanced Object Adaptation and Grasping Reliability
positive_passages:
[
{
"docid": "fda80f2f0eb57a101dde880b48a80ba4",
"text": "In this paper, we analyze and compare existing human grasp taxonomies and synthesize them into a single new taxonomy (dubbed “The GRASP Taxonomy” after the GRASP project funded by the European Commission). We consider only static and stable grasps performed by one hand. The goal is to extract the largest set of different grasps that were referenced in the literature and arrange them in a systematic way. The taxonomy provides a common terminology to define human hand configurations and is important in many domains such as human-computer interaction and tangible user interfaces where an understanding of the human is basis for a proper interface. Overall, 33 different grasp types are found and arranged into the GRASP taxonomy. Within the taxonomy, grasps are arranged according to 1) opposition type, 2) the virtual finger assignments, 3) type in terms of power, precision, or intermediate grasp, and 4) the position of the thumb. The resulting taxonomy incorporates all grasps found in the reviewed taxonomies that complied with the grasp definition. We also show that due to the nature of the classification, the 33 grasp types might be reduced to a set of 17 more general grasps if only the hand configuration is considered without the object shape/size.",
"title": ""
},
{
"docid": "f79bb538abc2612d59749096da193d3a",
"text": "In medical and biotechnology fields, soft devices are required because of their high safety from low mechanical impedance. FMA (Flexible Microactuator) is one of the typical soft actuators. It consists of fiber-reinforced rubber structure with multi air chambers and realizes bending motion pneumatically. It has been applied to robot hands, robot legs and so on. High potential of FMA has been confirmed by many experiments reported in several papers. However in fabrication process of the actuator, it is difficult to embed the reinforced fiber in the rubber structure. In this study, we aim at development of a fiber less FMA realizing quite large motion, which can be said curling motion, and a soft hand using the actuators. We design the actuator without fiber using nonlinear FEM (Finite Element Method) and derived efficient shape. The actuator is fabricated through micro rubber casting process including micro machining process for molds, micro vacuum rubber molding process and rubber bonding process with surface improvement by excimer light. Basic driving experiments of the actuator showed that it realized curling motion which agreed well with FEM results. And the actuator could grasp a fish egg without breaking. Additionally, we made a soft hand consisting of three curling actuators. This hand also could be manufactured by simple casting process. The developed hand works opening and closing motions well.",
"title": ""
}
]
negative_passages:
[
{
"docid": "1f5a244d4ef3e6129d14c50fb26bc9eb",
"text": "The authors describe blockchain’s fundamental concepts, provide perspectives on its challenges and opportunities, and trace its origins from the Bitcoin digital cash system to recent applications.",
"title": ""
},
{
"docid": "c3dba6bf97368e6fb707ea622ca5fbfc",
"text": "This paper studies the problem of obtaining depth information from focusing and defocusing, which have long been noticed as important sources of depth information for human and machine vision. In depth from focusing, we try to eliminate the local maxima problem which is the main source of inaccuracy in focusing; in depth from defocusing, a new computational model is proposed to achieve higher accuracy. The major contributions of this paper are: (1) In depth from focusing, instead of the popular Fibonacci search which is often trapped in local maxima, we propose the combination of Fibonacci search and curve tting, which leads to an unprecedentedly accurate result; (2) New model of the blurring e ect which takes the geometric blurring as well as the imaging blurring into consideration, and the calibration of the blurring model; (3) In spectrogram-based depth from defocusing, an iterative estimation method is proposed to decrease or eliminate the window e ect. This paper reports focus ranging with less than 1/1000 error and the defocus ranging with about 1/200 error. With this precision, depth from focus ranging is becoming competitive with stereo vision for reconstructing 3D depth information.",
"title": ""
},
{
"docid": "98571cb7f32b389683e8a9e70bd87339",
"text": "We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework. The first problem is implicit bias present in the reward functions used in these algorithms. While these biases might work well for some environments, they can also lead to sub-optimal behavior in others. Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications. In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10. Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments.",
"title": ""
},
{
"docid": "ba5cd7dcf8d7e9225df1d9dc69c95c11",
"text": "e eective of information retrieval (IR) systems have become more important than ever. Deep IR models have gained increasing aention for its ability to automatically learning features from raw text; thus, many deep IR models have been proposed recently. However, the learning process of these deep IR models resemble a black box. erefore, it is necessary to identify the dierence between automatically learned features by deep IR models and hand-craed features used in traditional learning to rank approaches. Furthermore, it is valuable to investigate the dierences between these deep IR models. is paper aims to conduct a deep investigation on deep IR models. Specically, we conduct an extensive empirical study on two dierent datasets, including Robust and LETOR4.0. We rst compared the automatically learned features and handcraed features on the respects of query term coverage, document length, embeddings and robustness. It reveals a number of disadvantages compared with hand-craed features. erefore, we establish guidelines for improving existing deep IR models. Furthermore, we compare two dierent categories of deep IR models, i.e. representation-focused models and interaction-focused models. It is shown that two types of deep IR models focus on dierent categories of words, including topic-related words and query-related words.",
"title": ""
},
{
"docid": "fce21a54f6319bcc798914a6fc4a8125",
"text": "CRISPR-Cas systems have rapidly transitioned from intriguing prokaryotic defense systems to powerful and versatile biomolecular tools. This article reviews how these systems have been translated into technologies to manipulate bacterial genetics, physiology, and communities. Recent applications in bacteria have centered on multiplexed genome editing, programmable gene regulation, and sequence-specific antimicrobials, while future applications can build on advances in eukaryotes, the rich natural diversity of CRISPR-Cas systems, and the untapped potential of CRISPR-based DNA acquisition. Overall, these systems have formed the basis of an ever-expanding genetic toolbox and hold tremendous potential for our future understanding and engineering of the bacterial world.",
"title": ""
},
{
"docid": "aebddf5d1d995587630fe7cdfa607a9d",
"text": "The ability to create and interact with large-scale domainspecific knowledge bases from unstructured/semi-structured data is the foundation for many industry-focused cognitive systems. We will demonstrate the Content Services system that provides cloud services for creating and querying highquality domain-specific knowledge bases by analyzing and integrating multiple (un/semi)structured content sources. We will showcase an instantiation of the system for a financial domain. We will also demonstrate both cross-lingual natural language queries and programmatic API calls for interacting with this knowledge base.",
"title": ""
},
{
"docid": "682b7612288807437ce7b6ccb1e418cb",
"text": "The critical attributes of episodic memory are self, autonoetic consciousness and subjectively sensed time. The aim of this paper is to present a theoretical overview of our already published researches into the nature of episodic memory over the course of time. We have developed a new method of assessing autobiographical memory (TEMPau task), which is specially designed to measure these specific aspects, based on the sense of re-experiencing events from across the entire lifespan. Based on our findings of cognitive, neuropsychological and neuroimaging studies, new insights into episodic autobiographical memories are presented, focusing on the effects of age of the subjects interacting with time interval in healthy subjects and lesioned patients. The multifaceted and complex nature of episodic memory is emphasized and it is suggested that mental time travel through subjective time, which allows individuals to re-experience specific past events through a feeling of self-awareness, is the last feature of autobiographical memory to become fully operational in development and the first feature to go in aging and most amnesias. Our findings highlight the critical role of frontotemporal areas in constructive autobiographical memory processes, and especially hippocampus, in re-experiencing episodic details from the recent or more distant past.",
"title": ""
},
{
"docid": "727d54d70802ac0b4501358bd501b0bc",
"text": "This paper presents the field testing of a street lighting monitoring and control system. The system is based on a WSN network of the large-scale type that enables the remote control of street lighting lamps. The system also enables savings in terms of the electric power and maintenance costs. The architecture uses integrated Doppler sensors that allows for vehicle detection and help complete the power efficiency objective. Thus, when a vehicle is detected the light intensity of the lamps is increased to a preset level, so as not to affect road traffic safety, and reduced in the opposite case. Moreover, the system uses current sensors so as to allow for the identification of any possible malfunctions and thus facilitate the maintenance process. According to the obtained results, the system allows for an increased performance level and can be integrated in the Smart City concept.",
"title": ""
},
{
"docid": "493ad96590ee91fdfd68a4e59492dc55",
"text": "The 21st century will see a renewed focus on intermodal freight transportation driven by the changing requirements of global supply chains. Each of the transportation modes (air, inland water, ocean, pipeline, rail, and road) has gone through technological evolution and has functioned separately under a modally based regulatory structure for most of the 20th century. With the development of containerization in the mid-1900s, the reorientation toward deregulation near the end of the century, and a new focus on logistics and global supply chain requirements, the stage is set for continued intermodal transportation growth. The growth of intermodal freight transportation will be driven and challenged by four factors: (a) measuring, understanding, and responding to the role of intermodalism in the changing customer requirements and hypercompetition of supply chains in a global marketplace; (b) the need to reliably and flexibly respond to changing customer requirements with seamless and integrated coordination of freight and equipment flows through various modes; (c) knowledge of current and future intermodal operational options and alternatives, as well as the potential for improved information and communications technology and the challenges associated with their application; and (d) constraints on and coordination of infrastructure capacity, including policy and regulatory issues, as well as better management of existing infrastructure and broader considerations on future investment in new infrastructure.",
"title": ""
},
{
"docid": "ce0ba4696c26732ac72b346f72af7456",
"text": "OBJECTIVE\nThe purpose of this study was to examine the relationship between two forms of helping behavior among older adults--informal caregiving and formal volunteer activity.\n\n\nMETHODS\nTo evaluate our hypotheses, we employed Tobit regression models to analyze panel data from the first two waves of the Americans' Changing Lives survey.\n\n\nRESULTS\nWe found that older adult caregivers were more likely to be volunteers than noncaregivers. Caregivers who provided a relatively high number of caregiving hours annually reported a greater number of volunteer hours than did noncaregivers. Caregivers who provided care to nonrelatives were more likely than noncaregivers to be a volunteer and to volunteer more hours. Finally, caregivers were more likely than noncaregivers to be asked to volunteer.\n\n\nDISCUSSION\nOur results provide support for the hypothesis that caregivers are embedded in networks that provide them with more opportunities for volunteering. Additional research on the motivations for volunteering and greater attention to the context and hierarchy of caregiving and volunteering are needed.",
"title": ""
},
{
"docid": "ceb6ebab7d4902c6f27c261df996f4c1",
"text": "Depth map estimation is an important part of the multi-view video coding and virtual view synthesis within the free viewpoint video applications. However, computing an accurate depth map is a computationally complex process, which makes real-time implementation challenging. Alternatively, a simple estimation, though quick and promising for real-time processing, might result in inconsistent multi-view depth map sequences. To exploit this simplicity and to improve the quality of depth map estimation, we propose a novel content adaptive enhancement technique applied to the previously estimated multi-view depth map sequences. The enhancement method is locally adapted to edges, motion and depth-range of the scene to avoid blurring the synthesized views and to reduce the computational complexity. At the same time, and very importantly, the method enforces consistency across the spatial, temporal and inter-view dimensions of the depth maps so that both the coding efficiency and the quality of the synthesized views are improved. We demonstrate these improvements in the experiments, where the enhancement method is applied to several multi-view test sequences and the obtained synthesized views are compared to the views synthesized using other methods in terms of both numerical and perceived visual quality.",
"title": ""
},
{
"docid": "ff815f534ab19e79d46adaf8f579f01c",
"text": "Leveraging zero-shot learning to learn mapping functions between vector spaces of different languages is a promising approach to bilingual dictionary induction. However, methods using this approach have not yet achieved high accuracy on the task. In this paper, we propose a bridging approach, where our main contribution is a knowledge distillation training objective. As teachers, rich resource translation paths are exploited in this role. And as learners, translation paths involving low resource languages learn from the teachers. Our training objective allows seamless addition of teacher translation paths for any given low resource pair. Since our approach relies on the quality of monolingual word embeddings, we also propose to enhance vector representations of both the source and target language with linguistic information. Our experiments on various languages show large performance gains from our distillation training objective, obtaining as high as 17% accuracy improvements.",
"title": ""
},
{
"docid": "43101dd1a0b588dc773d1a917bff1f40",
"text": "This paper presents a new framework for human action recognition from a 3D skeleton sequence. Previous studies do not fully utilize the temporal relationships between video segments in a human action. Some studies successfully used very deep Convolutional Neural Network (CNN) models but often suffer from the data insufficiency problem. In this study, we first segment a skeleton sequence into distinct temporal segments in order to exploit the correlations between them. The temporal and spatial features of a skeleton sequence are then extracted simultaneously by utilizing a fine-to-coarse (F2C) CNN architecture optimized for human skeleton sequences. We evaluate our proposed method on NTU RGB+D and SBU Kinect Interaction dataset. It achieves 79.6% and 84.6% of accuracies on NTU RGB+D with cross-object and cross-view protocol, respectively, which are almost identical with the state-of-the-art performance. In addition, our method significantly improves the accuracy of the actions in two-person interactions.",
"title": ""
},
{
"docid": "b7c0864be28d70d49ae4a28fb7d78f04",
"text": "UNLABELLED\nThe replacement of crowns and bridges is a common procedure for many dental practitioners. When correctly planned and executed, fixed prostheses will provide predictable function, aesthetics and value for money. However, when done poorly, they are more likely to fail prematurely and lead to irreversible damage to the teeth and supporting structures beneath. Sound diagnosis, assessment and technical skills are essential when dealing with failed or failing fixed restorations. These skills are essential for the 21st century dentist. This paper, with treated clinical examples, illustrates the areas of technical skill and clinical decisions needed for this type of work. It also provides advice on how the risk of premature failure can, in general, be further reduced. The article also confirms the very real risk in the UK of dento-legal problems when patients experience unexpected problems with their crowns and bridges.\n\n\nCLINICAL RELEVANCE\nThis paper outlines clinical implications of failed fixed prosthodontics to the dental surgeon. It also discusses factors that we can all use to predict and reduce the risk of premature restoration failure. Restoration design, clinical execution and patient factors are the most frequent reasons for premature problems. It is worth remembering (and informing patients) that the health of the underlying supporting dental tissue is often irreversibly compromised at the time of fixed restoration failure.",
"title": ""
},
{
"docid": "829e437aee100b302f35900e0b0a91ab",
"text": "A 1. 5 V 0.18mum CMOS LNA for GPS applications has been designed with fully differential topology. Under such a low supply voltage, the fully differential LNA has been simulated, it provides a series of good results in Noise figure, Linearity and Power consumption. The LNA achieves a Noise figure of 1. 5 dB, voltage gain of 32 dB, Power dissipation of 6 mW, and the input reflection coefficient (Sn) is -23 dB.",
"title": ""
},
{
"docid": "bb6c42de5906f0f1d83f2be31c6c07e3",
"text": "Correlation is a very effective way to align intensity images. We extend the correlation technique to point set registration using a method we call kernel correlation. Kernel correlation is an affinity measure, and it is also a function of the point set entropy. We define the point set registration problem as finding the maximum kernel correlation configuration of the the two point sets to be registered. The new registration method has intuitive interpretations, simple to implement algorithm and easy to prove convergence property. Our method shows favorable performance when compared with the iterative closest point (ICP) and EM-ICP methods.",
"title": ""
},
{
"docid": "56a0b2f912718c097502c95349b279b9",
"text": "Using a relational DBMS as back-end engine for an XQuery processing system leverages relational query optimization and scalable query processing strategies provided by mature DBMS engines in the XML domain. Though a lot of theoretical work has been done in this area and various solutions have been proposed, no complete systems have been made available so far to give the practical evidence that this is a viable approach. In this paper, we describe the ourely relational XQuery processor Pathfinder that has been built on top of the extensible RDBMS MonetDB. Performance results indicate that the system is capable of evaluating XQuery queries efficiently, even if the input XML documents become huge. We additionally present further contributions such as loop-lifted staircase join, techniques to derive order properties and to reduce sorting effort in the generated relational algebra plans, as well as methods for optimizing XQuery joins, which, taken together, enabled us to reach our performance and scalability goals. 1998 ACM Computing Classification System: H.2.4, H.2.3, H.2.2, E.1",
"title": ""
},
{
"docid": "6da8710bf2429d2d0f5a66fb58918737",
"text": "T he great promise of surveys in which people report their own level of life satisfaction is that such surveys might provide a straightforward and easily collected measure of individual or national well-being that aggregates over the various components of well-being, such as economic status, health, family circumstances, and even human and political rights. Layard (2005) argues forcefully such measures do indeed achieve this end, providing measures of individual and aggregate happiness that should be the only gauges used to evaluate policy and progress. Such a position is in sharp contrast to the more widely accepted view, associated with Sen (1999), which is that human well-being depends on a range of functions and capabilities that enable people to lead a good life, each of which needs to be directly and objectively measured and which cannot, in general, be aggregated into a single summary measure. Which of life’s circumstances are important for life satisfaction, and which—if any—have permanent as opposed to merely transitory effects, has been the subject of lively debate. For economists, who usually assume that higher incomes represent a gain to the satisfaction of individuals, the role of income is of particular interest. It is often argued that income is both relatively unimportant and relatively transitory compared with family circumstances, unemployment, or health (for example, Easterlin, 2003). Comparing results from a given country over time, Easterlin (1974, 1995) famously noted that average national happiness does not increase over long spans of time, in spite of large increases in per capita income. These",
"title": ""
},
{
"docid": "51a2d48f43efdd8f190fd2b6c9a68b3c",
"text": "Textual passwords are often the only mechanism used to authenticate users of a networked system. Unfortunately, many passwords are easily guessed or cracked. In an attempt to strengthen passwords, some systems instruct users to create mnemonic phrase-based passwords. A mnemonic password is one where a user chooses a memorable phrase and uses a character (often the first letter) to represent each word in the phrase.In this paper, we hypothesize that users will select mnemonic phrases that are commonly available on the Internet, and that it is possible to build a dictionary to crack mnemonic phrase-based passwords. We conduct a survey to gather user-generated passwords. We show the majority of survey respondents based their mnemonic passwords on phrases that can be found on the Internet, and we generate a mnemonic password dictionary as a proof of concept. Our 400,000-entry dictionary cracked 4% of mnemonic passwords; in comparison, a standard dictionary with 1.2 million entries cracked 11% of control passwords. The user-generated mnemonic passwords were also slightly more resistant to brute force attacks than control passwords. These results suggest that mnemonic passwords may be appropriate for some uses today. However, mnemonic passwords could become more vulnerable in the future and should not be treated as a panacea.",
"title": ""
},
{
"docid": "9e8c61584bbbda83c73a4cb2f74f8d37",
"text": "Internet addiction (IA) has become a widespread and problematic phenomenon. Little is known about the effect of internet addiction (IA). The present study focus on the Meta analysis of internet addiction and its relation to mental health among youth. Effect size estimated the difference between the gender with respect to the severity of internet addiction and the depression, anxiety, social isolation and sleep pattern positive.",
"title": ""
}
]
subset: scidocsrr

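Each record above pairs one query with its judged positive passages and sampled negative passages, the shape typically consumed by a reranker. A minimal sketch, assuming a `row` dict with the field names from the schema table, of flattening one record into labelled query–passage pairs (the pointwise 1/0 labelling is an illustrative choice, not something the data itself prescribes):

```python
# Sketch: turn one record into (query, passage_text, label) tuples.
# Assumption: `row` has the columns shown in the schema table above.
from typing import Dict, List, Tuple

def row_to_pairs(row: Dict) -> List[Tuple[str, str, int]]:
    """Label judged positives as 1 and sampled negatives as 0."""
    pairs: List[Tuple[str, str, int]] = []
    for passage in row["positive_passages"]:
        pairs.append((row["query"], passage["text"], 1))
    for passage in row["negative_passages"]:
        pairs.append((row["query"], passage["text"], 0))
    return pairs
```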
query_id: 35e9e15e78125f9407dea472d9050043
query: Effective search space reduction for spell correction using character neural embeddings
positive_passages:
[
{
"docid": "3bda091d69af44f28cb3bd5893a5b8ef",
"text": "The method described assumes that a word which cannot be found in a dictionary has at most one error, which might be a wrong, missing or extra letter or a single transposition. The unidentified input word is compared to the dictionary again, testing each time to see if the words match—assuming one of these errors occurred. During a test run on garbled text, correct identifications were made for over 95 percent of these error types.",
"title": ""
}
]
negative_passages:
[
{
"docid": "55ca84497c465c236b309adc597fe3ad",
"text": "BACKGROUND\nSelf-myofascial release (SMFR) is a type of myofascial release performed by the individual themselves rather than by a clinician, typically using a tool.\n\n\nOBJECTIVES\nTo review the literature regarding studies exploring acute and chronic clinical effects of SMFR.\n\n\nMETHODS\nPubMed and Google Scholar databases were searched during February 2015 for studies containing words related to the topic of SMFR.\n\n\nRESULTS\nAcutely, SMFR seems to increase flexibility and reduce muscle soreness but does not impede athletic performance. It may lead to improved arterial function, improved vascular endothelial function, and increased parasympathetic nervous system activity acutely, which could be useful in recovery. There is conflicting evidence whether SMFR can improve flexibility long-term.\n\n\nCONCLUSION\nSMFR appears to have a range of potentially valuable effects for both athletes and the general population, including increasing flexibility and enhancing recovery.",
"title": ""
},
{
"docid": "e465b9a38e7649f541ab9e419103b362",
"text": "Spoken language based intelligent assistants (IAs) have been developed for a number of domains but their functionality has mostly been confined to the scope of a given app. One reason is that it’s is difficult for IAs to infer a user’s intent without access to relevant context and unless explicitly implemented, context is not available across app boundaries. We describe context-aware multi-app dialog systems that can learn to 1) identify meaningful user intents; 2) produce natural language representation for the semantics of such intents; and 3) predict user intent as they engage in multi-app tasks. As part of our work we collected data from the smartphones of 14 users engaged in real-life multi-app tasks. We found that it is reasonable to group tasks into high-level intentions. Based on the dialog content, IA can generate useful phrases to describe the intention. We also found that, with readily available contexts, IAs can effectively predict user’s intents during conversation, with accuracy at 58.9%.",
"title": ""
},
{
"docid": "513750b6909ae13f2ef54a361e476990",
"text": "OBJECTIVES\nFactors that influence the likelihood of readmission for chronic obstructive pulmonary disease (COPD) patients and the impact of posthospital care coordination remain uncertain. LACE index (L = length of stay, A = Acuity of admission; C = Charlson comorbidity index; E = No. of emergency department (ED) visits in last 6 months) is a validated tool for predicting 30-days readmissions for general medicine patients. We aimed to identify variables predictive of COPD readmissions including LACE index and determine the impact of a novel care management process on 30-day all-cause readmission rate.\n\n\nMETHODS\nIn a case-control design, potential readmission predictors including LACE index were analyzed using multivariable logistic regression for 461 COPD patients between January-October 2013. Patients with a high LACE index at discharge began receiving care coordination in July 2013. We tested for association between readmission and receipt of care coordination between July-October 2013. Care coordination consists of a telephone call from the care manager who: 1) reviews discharge instructions and medication reconciliation; 2) emphasizes importance of medication adherence; 3) makes a follow-up appointment with primary care physician within 1-2 weeks and; 4) makes an emergency back-up plan.\n\n\nRESULTS\nCOPD readmission rate was 16.5%. An adjusted LACE index of ≥ 13 was not associated with readmission (p = 0.186). Significant predictors included female gender (odds ratio [OR] 0.51, 95% confidence interval [CI] 0.29-0.91, p = 0.021); discharge to skilled nursing facility (OR 3.03, 95% CI 1.36-6.75, p = 0.007); 4-6 comorbid illnesses (OR 9.21, 95% CI 1.17-76.62, p = 0.035) and ≥ 4 ED visits in previous 6 months (OR 6.40, 95% CI 1.25-32.87, p = 0.026). Out of 119 patients discharged between July-October 2013, 41% received the care coordination. The readmission rate in the intervention group was 14.3% compared to 18.6% in controls (p = 0.62).\n\n\nCONCLUSIONS\nFactors influencing COPD readmissions are complex and poorly understood. LACE index did not predict 30-days all-cause COPD readmissions. Posthospital care coordination for transition of care from hospital to the community showed a 4.3% reduction in the 30-days all-cause readmission rate which did not reach statistical significance (p = 0.62).",
"title": ""
},
{
"docid": "1ca5d4ba5591dbc2c6c2044c19be2ffb",
"text": "Distractor generation is a crucial step for fill-in-the-blank question generation. We propose a generative model learned from training generative adversarial nets (GANs) to create useful distractors. Our method utilizes only context information and does not use the correct answer, which is completely different from previous Ontology-based or similarity-based approaches. Trained on the Wikipedia corpus, the proposed model is able to predict Wiki entities as distractors. Our method is evaluated on two biology question datasets collected from Wikipedia and actual college-level exams. Experimental results show that our context-based method achieves comparable performance to a frequently used word2vec-based method for the Wiki dataset. In addition, we propose a second-stage learner to combine the strengths of the two methods, which further improves the performance on both datasets, with 51.7% and 48.4% of generated distractors being acceptable.",
"title": ""
},
{
"docid": "cdcfd25cd84870b51297ec776c8fa447",
"text": "This paper aims at the construction of a music composition system that generates 16-bars musical works by interaction between human and the system, using interactive genetic algorithm. The present system generates not only various kinds of melody parts but also various kinds of patterns of backing parts and tones of all parts, so that users can acquire satisfied musical work. The users choose generating mode of musical work from three points, i.e., melody part, tones of all parts, or patterns of backing parts, and the users evaluate impressions of presented candidates of musical work through the user interface. The present system generates the candidates based on user's subjective evaluation. This paper shows evaluation experiments to confirm the usefulness of the present system.",
"title": ""
},
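The defining feature of an interactive genetic algorithm like the one above is that fitness comes from a human rating rather than an objective function. The sketch below illustrates that loop only; the melody encoding (a list of pitches from a C-major scale) and the rate_candidate() stand-in for the listener are assumptions, not the paper's representation.

```python
# Minimal interactive-GA sketch: candidates are evolved, fitness is supplied by
# the user's subjective ratings. rate_candidate() is a placeholder for the
# human-in-the-loop listening step.
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]      # C major pitches; one bar = 8 notes

def random_melody(bars=16):
    return [random.choice(SCALE) for _ in range(8 * bars)]

def rate_candidate(melody):
    # In a real IGA the user listens and returns a score, e.g. 1-5.
    return random.uniform(1, 5)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(melody, rate=0.05):
    return [random.choice(SCALE) if random.random() < rate else n for n in melody]

population = [random_melody() for _ in range(8)]
for generation in range(5):
    scored = sorted(population, key=rate_candidate, reverse=True)
    parents = scored[:4]                                   # keep the user's favourites
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(4)]
    population = parents + children
```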
{
"docid": "60c8c222e19d27b40b2b5dc99a588d6c",
"text": "In this work, we propose a feature exploration method for learning-based cuffless blood pressure measurement. More specifically, to efficiently explore a large feature space from the photoplethysmography signal, we have applied several analytical techniques, including random error elimination, adaptive outlier removal, maximum information coefficient and Pearson's correlation coefficient based feature assessment methods. We evaluate fifty-seven possible feature candidates and propose three separate feature sets with each containing eleven features to predict the systolic blood pressure (SBP), diastolic blood pressure (DBP) and mean blood pressure (MBP), respectively. From our experimental results on a realistic dataset, this work achieves 4.77±7.68, 3.67±5.69 and 3.85±5.87 mmHg prediction accuracy for SBP, DBP and MBP. In summary, using the proposed light-weight features, the proposed predictors can successfully achieve a Grade A in two standards proposed by the American National Standards of the Association for the Advancement of Medical Instrumentation (AAMI) and British Hypertension Society (BHS).",
"title": ""
},
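One of the assessment steps named above is Pearson-correlation-based feature ranking. The sketch below shows only that step on synthetic data; the random feature matrix and SBP labels stand in for the real PPG-derived features, and the paper's other steps (error elimination, outlier removal, MIC) are omitted.

```python
# Minimal sketch: rank 57 candidate features by absolute Pearson correlation
# with the target blood pressure and keep the top 11, as in the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 57))          # 200 records x 57 candidate features
sbp = rng.normal(120, 15, size=200)            # systolic blood pressure labels (synthetic)

corr = np.array([np.corrcoef(features[:, j], sbp)[0, 1] for j in range(features.shape[1])])
top11 = np.argsort(-np.abs(corr))[:11]         # indices of the 11 most correlated features
print(top11)
```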
{
"docid": "d12e9664d73b29b43c650a8606ec7e2b",
"text": "As autonomous agents proliferate in the real world, both in software and robotic settings, they will increasingly need to band together for cooperative activities with previously unfamiliar teammates. In such ad hoc team settings, team strategies cannot be developed a priori. Rather, an agent must be prepared to cooperate with many types of teammates: it must collaborate without pre-coordination. This paper challenges the AI community to develop theory and to implement prototypes of ad hoc team agents. It defines the concept of ad hoc team agents, specifies an evaluation paradigm, and provides examples of possible theoretical and empirical approaches to challenge. The goal is to encourage progress towards this ambitious, newly realistic, and increasingly important research goal.",
"title": ""
},
{
"docid": "7b63daa48a700194f04293542c83bb20",
"text": "BACKGROUND\nPresent treatment strategies for rheumatoid arthritis include use of disease-modifying antirheumatic drugs, but a minority of patients achieve a good response. We aimed to test the hypothesis that an improved outcome can be achieved by employing a strategy of intensive outpatient management of patients with rheumatoid arthritis--for sustained, tight control of disease activity--compared with routine outpatient care.\n\n\nMETHODS\nWe designed a single-blind, randomised controlled trial in two teaching hospitals. We screened 183 patients for inclusion. 111 were randomly allocated either intensive management or routine care. Primary outcome measures were mean fall in disease activity score and proportion of patients with a good response (defined as a disease activity score <2.4 and a fall in this score from baseline by >1.2). Analysis was by intention-to-treat.\n\n\nFINDINGS\nOne patient withdrew after randomisation and seven dropped out during the study. Mean fall in disease activity score was greater in the intensive group than in the routine group (-3.5 vs -1.9, difference 1.6 [95% CI 1.1-2.1], p<0.0001). Compared with routine care, patients treated intensively were more likely to have a good response (definition, 45/55 [82%] vs 24/55 [44%], odds ratio 5.8 [95% CI 2.4-13.9], p<0.0001) or be in remission (disease activity score <1.6; 36/55 [65%] vs 9/55 [16%], 9.7 [3.9-23.9], p<0.0001). Three patients assigned routine care and one allocated intensive management died during the study; none was judged attributable to treatment.\n\n\nINTERPRETATION\nA strategy of intensive outpatient management of rheumatoid arthritis substantially improves disease activity, radiographic disease progression, physical function, and quality of life at no additional cost.",
"title": ""
},
{
"docid": "55861c73dda7c01f12a8a6f756a74e29",
"text": "Strategies for extracting the three-phase reference currents for shunt active power filters are compared, evaluating their performance under different source and load conditions with the new IEEE Standard 1459 power definitions. The study was applied to a three-phase four-wire system in order to include imbalance. Under balanced and sinusoidal voltages, harmonic cancellation and reactive power compensation can be attained in all the methods. However, when the voltages are distorted and/or unbalanced, the compensation capabilities are not equivalent, with some strategies unable to yield an adequate solution when the mains voltages are not ideal. Simulation and experimental results are included",
"title": ""
},
{
"docid": "fe18b85af942d35b4e4ec1165e2e63c3",
"text": "The retrofitting of existing buildings to resist the seismic loads is very important to avoid losing lives or financial disasters. The aim at retrofitting processes is increasing total structure strength by increasing stiffness or ductility ratio. In addition, the response modification factors (R) have to satisfy the code requirements for suggested retrofitting types. In this study, two types of jackets are used, i.e. full reinforced concrete jackets and surrounding steel plate jackets. The study is carried out on an existing building in Madinah by performing static pushover analysis before and after retrofitting the columns. The selected model building represents nearly all-typical structure lacks structure built before 30 years ago in Madina City, KSA. The comparison of the results indicates a good enhancement of the structure respect to the applied seismic forces. Also, the response modification factor of the RC building is evaluated for the studied cases before and after retrofitting. The design of all vertical elements (columns) is given. The results show that the design of retrofitted columns satisfied the code's design stress requirements. However, for some retrofitting types, the ductility requirements represented by response modification factor do not satisfy KSA design code (SBC301). Keywords—Concrete jackets, steel jackets, RC buildings pushover analysis, non-linear analysis.",
"title": ""
},
{
"docid": "c02cf08af76c24a71de17ae2f3ac1b00",
"text": "Clustering analysis is one of the most used Machine Learning techniques to discover groups among data objects. Some clustering methods require the number of clusters into which the data is going to be partitioned. There exist several cluster validity indices that help us to approximate the optimal number of clusters of the dataset. However, such indices are not suitable to deal with Big Data due to its size limitation and runtime costs. This paper presents two clustering validity indices that handle large amount of data in low computational time. Our indices are based on redefinitions of traditional indices by simplifying the intra-cluster distance calculation. Two types of tests have been carried out over 28 synthetic datasets to analyze the performance of the proposed indices. First, we test the indices with small and medium size datasets to verify that our indices have a similar effectiveness to the traditional ones. Subsequently, tests on datasets of up to 11 million records and 20 features have been executed to check their efficiency. The results show that both indices can handle Big Data in a very low computational time with an effectiveness similar to the traditional indices using Apache Spark framework.",
"title": ""
},
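The key idea above is replacing the quadratic pairwise intra-cluster distance computation with a linear-time distance-to-centroid term. The following is an illustrative simplification of that idea, not the exact indices proposed in the paper, and it runs on NumPy rather than the Apache Spark framework mentioned in the abstract.

```python
# Hedged sketch: a cluster validity index whose intra-cluster term uses only
# distances to the cluster centroid (O(n)) instead of all pairwise distances
# (O(n^2)). Illustrative only; not the paper's exact indices.
import numpy as np

def simplified_validity_index(X, labels):
    clusters = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in clusters])
    # intra-cluster compactness: mean distance of points to their own centroid
    intra = np.mean([np.linalg.norm(X[labels == c] - centroids[i], axis=1).mean()
                     for i, c in enumerate(clusters)])
    # inter-cluster separation: minimum distance between centroids
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    inter = d[d > 0].min()
    return inter / intra        # larger = better separated, more compact clusters

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])
labels = np.array([0] * 100 + [1] * 100)
print(simplified_validity_index(X, labels))
```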
{
"docid": "fd338a7f607b121f20640cd8a1d590fa",
"text": "The performance of adaptive acoustic echo cancelers AEC is sensitive to the nonstationarity and correlation of speech signals. In this paper, we explore a new approach based on an adaptive AEC driven by data hidden in speech, to enhance the AEC robustness. We propose a two-stage AEC, where the first stage is a classical NLMS-based AEC driven by the far-end speech. In the signal, we embed—in an extended conception of data hiding—an imperceptible white and stationary signal, i.e., a watermark. The goal of the second stage AEC is to identify the misalignment of the first stage. It is driven by the watermark solely and takes advantage of its appropriate properties stationary and white to improve the robustness of the two-stage AEC to the nonstationarity and correlation of speech, and thus reduce the overall system misadjustment. We test two kinds of implementations: in the first implementation, referred to as adaptive watermark driven AEC A-WdAEC, the watermark is a white stationary Gaussian noise. Driven by this signal, the second stage converges faster than the classical AEC and provides better performance in steady state. In the second implementation, referred to as maximum length sequences WdAEC MLS-WdAEC, the watermark is built from MLS. Thus, the second stage performs a block identification of the first stage misalignment, given by the circular correlation watermark/preprocessed version of the first stage residual echo. The advantage of this implementation lies in its robustness against noise and undermodeling. Simulation results show the relevance of the “WdAEC” approach, compared to the classical “error-driven AEC.”",
"title": ""
},
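The first stage described above is a classical NLMS-based echo canceller driven by the far-end speech. The sketch below shows only that standard NLMS stage on synthetic signals; the simulated echo path, signal length, and step size are illustrative, and the watermark-driven second stage is not implemented here.

```python
# Minimal NLMS adaptive echo canceller (first-stage AEC): far-end signal as
# reference, microphone signal as desired response. Echo path is simulated.
import numpy as np

rng = np.random.default_rng(0)
far_end = rng.normal(size=8000)                    # far-end (loudspeaker) signal
true_echo_path = rng.normal(size=64) * np.exp(-np.arange(64) / 10.0)
mic = np.convolve(far_end, true_echo_path)[:8000]  # echo picked up by the microphone

L, mu, eps = 64, 0.5, 1e-6
w = np.zeros(L)                                    # adaptive filter estimate
err = np.zeros(8000)
for n in range(L, 8000):
    x = far_end[n - L + 1:n + 1][::-1]             # most recent L far-end samples
    y = w @ x                                      # estimated echo
    e = mic[n] - y                                 # residual echo
    w += mu * e * x / (x @ x + eps)                # NLMS update
    err[n] = e

print(np.mean(err[-1000:] ** 2))                   # residual power after convergence
```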
{
"docid": "b02dcd4d78f87d8ac53414f0afd8604b",
"text": "This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector.",
"title": ""
},
{
"docid": "0b08e657d012d26310c88e2129c17396",
"text": "In order to accurately determine the growth of greenhouse crops, the system based on AVR Single Chip microcontroller and wireless sensor networks is developed, it transfers data through the wireless transceiver devices without setting up electric wiring, the system structure is simple. The monitoring and management center can control the temperature and humidity of the greenhouse, measure the carbon dioxide content, and collect the information about intensity of illumination, and so on. In addition, the system adopts multilevel energy memory. It combines energy management with energy transfer, which makes the energy collected by solar energy batteries be used reasonably. Therefore, the self-managing energy supply system is established. The system has advantages of low power consumption, low cost, good robustness, extended flexible. An effective tool is provided for monitoring and analysis decision-making of the greenhouse environment.",
"title": ""
},
{
"docid": "401dd92fd39f4f04a67f684e73c9c210",
"text": "We propose an approach to generate images of people given a desired appearance and pose. Disentangled representations of pose and appearance are necessary to handle the compound variability in the resulting generated images. Hence, we develop an approach based on intermediate representations of poses and appearance: our pose-guided appearance rendering network firstly encodes the targets’ poses using an encoder-decoder neural network. Then the targets’ appearances are encoded by learning adaptive appearance filters using a fully convolutional network. Finally, these filters are placed in the encoder-decoder neural networks to complete the rendering. We demonstrate that our model can generate images and videos that are superior to state-of-the-art methods, and can handle pose guided appearance rendering in both image and video generation.",
"title": ""
},
{
"docid": "146f1cd30a8f99e692cbd3e11d7245b0",
"text": "Record linkage has received significant attention in recent years due to the plethora of data sources that have to be integrated to facilitate data analyses. In several cases, such an integration involves disparate data sources containing huge volumes of records and must be performed in near real-time in order to support critical applications. In this paper, we propose the first summarization algorithms for speeding up online record linkage tasks. Our first method, called SkipBloom, summarizes efficiently the participating data sets, using their blocking keys, to allow for very fast comparisons among them. The second method, called BlockSketch, summarizes a block to achieve a constant number of comparisons for a submitted query record, during the matching phase. Additionally, we extend BlockSketch to adapt its functionality to streaming data, where the objective is to use a constant amount of main memory to handle potentially unbounded data sets. Through extensive experimental evaluation, using three real-world data sets, we demonstrate the superiority of our methods against two state-of-the-art algorithms for online record linkage.",
"title": ""
},
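The passage above describes block summaries that give a constant number of comparisons per query record. The sketch below illustrates only the general idea (a fixed number of representatives per blocking key, compared with simple string similarity); it is not the SkipBloom or BlockSketch algorithm, and the blocking key, similarity measure, and budget are assumptions.

```python
# Hedged sketch of block summarization: keep a small, fixed number of
# representatives per block and compare a query only against those.
from collections import defaultdict
from difflib import SequenceMatcher

MAX_REPS = 3   # assumed constant comparison budget per query

def blocking_key(name):
    return name[:3].lower()               # toy blocking key: first three letters

def similar(a, b):
    return SequenceMatcher(None, a, b).ratio()

blocks = defaultdict(list)                # block key -> list of representatives
for rec in ["Jonathan Smith", "Jon Smith", "Jonas Smythe", "Maria Lopez"]:
    reps = blocks[blocking_key(rec)]
    if len(reps) < MAX_REPS:
        reps.append(rec)                  # summary not full yet: keep the record

query = "Jonathon Smith"
candidates = blocks[blocking_key(query)]  # constant-size candidate set
best = max(candidates, key=lambda r: similar(query, r), default=None)
print(best, round(similar(query, best), 2) if best else None)
```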
{
"docid": "a961b8851761575ae9b54684c58aa30d",
"text": "We propose an optical wireless indoor localization using light emitting diodes (LEDs) and demonstrate it via simulation. Unique frequency addresses are assigned to each LED lamp, and transmitted through the light radiated by the LED. Using the phase difference, time difference of arrival (TDOA) localization algorithm is employed. Because the proposed localization method used pre-installed LED ceiling lamps, no additional infrastructure for localization is required to install and therefore, inexpensive system can be realized. The performance of the proposed localization method is evaluated by computer simulation, and the indoor location accuracy is less than 1 cm in the space of 5m x 5 m x 3 m.",
"title": ""
},
{
"docid": "9e953bdc98bc87398c37a62b0ec295c9",
"text": "● Compare and contrast nursing and non-nursing health promotion theories. ● Examine health promotion theories for consistency with accepted health promotion priorities and values. ● Articulate how health promotion theories move the profession forward. ● Discuss strengths and limitations associated with each health promotion theory or model. ● Describe the difference between a model and a theory. ● Identify theoretical assumptions and concepts within nursing and non-nursing theories. ● Develop his or her own health promotion model.",
"title": ""
},
{
"docid": "4b74b9d4c4b38082f9f667e363f093b2",
"text": "We have developed Textpresso, a new text-mining system for scientific literature whose capabilities go far beyond those of a simple keyword search engine. Textpresso's two major elements are a collection of the full text of scientific articles split into individual sentences, and the implementation of categories of terms for which a database of articles and individual sentences can be searched. The categories are classes of biological concepts (e.g., gene, allele, cell or cell group, phenotype, etc.) and classes that relate two objects (e.g., association, regulation, etc.) or describe one (e.g., biological process, etc.). Together they form a catalog of types of objects and concepts called an ontology. After this ontology is populated with terms, the whole corpus of articles and abstracts is marked up to identify terms of these categories. The current ontology comprises 33 categories of terms. A search engine enables the user to search for one or a combination of these tags and/or keywords within a sentence or document, and as the ontology allows word meaning to be queried, it is possible to formulate semantic queries. Full text access increases recall of biological data types from 45% to 95%. Extraction of particular biological facts, such as gene-gene interactions, can be accelerated significantly by ontologies, with Textpresso automatically performing nearly as well as expert curators to identify sentences; in searches for two uniquely named genes and an interaction term, the ontology confers a 3-fold increase of search efficiency. Textpresso currently focuses on Caenorhabditis elegans literature, with 3,800 full text articles and 16,000 abstracts. The lexicon of the ontology contains 14,500 entries, each of which includes all versions of a specific word or phrase, and it includes all categories of the Gene Ontology database. Textpresso is a useful curation tool, as well as search engine for researchers, and can readily be extended to other organism-specific corpora of text. Textpresso can be accessed at http://www.textpresso.org or via WormBase at http://www.wormbase.org.",
"title": ""
},
{
"docid": "1aa89c7b8be417345d78d1657d5f487f",
"text": "This paper proposes a new novel snubberless current-fed half-bridge front-end isolated dc/dc converter-based inverter for photovoltaic applications. It is suitable for grid-tied (utility interface) as well as off-grid (standalone) application based on the mode of control. The proposed converter attains clamping of the device voltage by secondary modulation, thus eliminating the need of snubber or active-clamp. Zero-current switching or natural commutation of primary devices and zero-voltage switching of secondary devices is achieved. Soft-switching is inherent owing to the proposed secondary modulation and is maintained during wide variation in voltage and power transfer capacity and thus is suitable for photovoltaic (PV) applications. Primary device voltage is clamped at reflected output voltage, and secondary device voltage is clamped at output voltage. Steady-state operation and analysis, and design procedure are presented. Simulation results using PSIM 9.0 are given to verify the proposed analysis and design. An experimental converter prototype rated at 200 W has been designed, built, and tested in the laboratory to verify and demonstrate the converter performance over wide variations in input voltage and output power for PV applications. The proposed converter is a true isolated boost converter and has higher voltage conversion (boost) ratio compared to the conventional active-clamped converter.",
"title": ""
}
] |
scidocsrr
|
8f1a1967cf14a9f225a4c4097ceb230a
|
Are we ready for autonomous driving? The KITTI vision benchmark suite
|
[
{
"docid": "c9b6f91a7b69890db88b929140f674ec",
"text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.",
"title": ""
}
] |
[
{
"docid": "2ce4d585edd54cede6172f74cf9ab8bb",
"text": "Enterprise resource planning (ERP) systems have been widely implemented by numerous firms throughout the industrial world. While success stories of ERP implementation abound due to its potential in resolving the problem of fragmented information, a substantial number of these implementations fail to meet the goals of the organization. Some are abandoned altogether and others contribute to the failure of an organization. This article seeks to identify the critical factors of ERP implementation and uses statistical analysis to further delineate the patterns of adoption of the various concepts. A cross-sectional mail survey was mailed to business executives who have experience in the implementation of ERP systems. The results of this study provide empirical evidence that the theoretical constructs of ERP implementation are followed at varying levels. It offers some fresh insights into the current practice of ERP implementation. In addition, this study fills the need for ERP implementation constructs that can be utilized for further study of this important topic.",
"title": ""
},
{
"docid": "569a7cfcf7dd4cc5132dc7ffa107bfcf",
"text": "We present the results of a study of definite descriptions use in written texts aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments, in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles, containing a total of 1412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. Themost interesting result of this study from a corpus annotation perspective was the rather low agreement (K=0.63) that we obtained using versions of Hawkins’ and Prince’s classification schemes; better results (K=0.76) were obtained using the simplified scheme proposed by Fraurud that includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. From a linguistic point of view, the most interesting observations were the great number of discourse-newdefinites in our corpus (in one of our experiments, about 50% of the definites in the collection were classified as discourse-new, 30% as anaphoric, and 18% as associative/bridging) and the presence of definites which did not seem to require a complete disambiguation. This paper will appear in Computational Linguistics.",
"title": ""
},
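The agreement figures quoted above (K=0.63, K=0.76) are kappa statistics. As a reference point, the sketch below computes Cohen's kappa for two annotators; the paper may use a different multi-annotator variant, and the label names and toy annotations here are hypothetical.

```python
# Minimal Cohen's kappa for two annotators over categorical labels.
from collections import Counter

def cohen_kappa(ann1, ann2):
    n = len(ann1)
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    c1, c2 = Counter(ann1), Counter(ann2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

a1 = ["first-mention", "subsequent-mention", "first-mention", "first-mention"]
a2 = ["first-mention", "subsequent-mention", "subsequent-mention", "first-mention"]
print(round(cohen_kappa(a1, a2), 2))   # -> 0.5 for this toy example
```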
{
"docid": "a4e8edda99a01f79372a43f2eebcca1f",
"text": "Autophagy occurs prior to apoptosis and plays an important role in cell death regulation during spinal cord injury (SCI). This study aimed to determine the effects and potential mechanism of the glucagon-like peptide-1 (GLP-1) agonist extendin-4 (Ex-4) in SCI. Seventy-two male Sprague Dawley rats were randomly assigned to sham, SCI, 2.5 μg Ex-4, and 10 μg Ex-4 groups. To induce SCI, a 10-g iron rod was dropped from a 20-mm height to the spinal cord surface. Ex-4 was administered via intraperitoneal injection immediately after surgery. Motor function evaluation with the Basso Beattie Bresnahan (BBB) locomotor rating scale indicated significantly increased scores (p < 0.01) in the Ex-4-treated groups, especially 10 μg, which demonstrated the neuroprotective effect of Ex-4 after SCI. The light chain 3-II (LC3-II) and Beclin 1 protein expression determined via western blot and the number of autophagy-positive neurons via immunofluorescence double labeling were increased by Ex-4, which supports promotion of autophagy (p < 0.01). The caspase-3 protein level and neuronal apoptosis via transferase UTP nick end labeling (TUNEL)/NeuN/DAPI double labeling were significantly reduced in the Ex-4-treated groups, which indicates anti-apoptotic effects (p < 0.01). Finally, histological assessment via Nissl staining demonstrated the Ex-4 groups exhibited a significantly greater number of surviving neurons and less cavity (p < 0.01). To our knowledge, this is the first study to indicate that Ex-4 significantly enhances motor function in rats after SCI, and these effects are associated with the promotion of autophagy and inhibition of apoptosis.",
"title": ""
},
{
"docid": "c2ed9f4fa8059b70387505225d5d7c21",
"text": "Accurate positioning systems can be realized via ultra-wideband signals due to their high time resolution. In this article, position estimation is studied for UWB systems. After a brief introduction to UWB signals and their positioning applications, two-step positioning systems are investigated from a UWB perspective. It is observed that time-based positioning is well suited for UWB systems. Then time-based UWB ranging is studied in detail, and the main challenges, theoretical limits, and range estimation algorithms are presented. Performance of some practical time-based ranging algorithms is investigated and compared against the maximum likelihood estimator and the theoretical limits. The trade-off between complexity and accuracy is observed.",
"title": ""
},
{
"docid": "0e5fc650834d883e291c2cf4ace91d35",
"text": "The majority of practitioners express software requirements using natural text notations such as user stories. Despite the readability of text, it is hard for people to build an accurate mental image of the most relevant entities and relationships. Even converting requirements to conceptual models is not sufficient: as the number of requirements and concepts grows, obtaining a holistic view of the requirements becomes increasingly difficult and, eventually, practically impossible. In this paper, we introduce and experiment with a novel, automated method for visualizing requirements—by showing the concepts the text references and their relationships—at different levels of granularity. We build on two pillars: (i) clustering techniques for grouping elements into coherent sets so that a simplified overview of the concepts can be created, and (ii) state-of-the-art, corpus-based semantic relatedness algorithms between words to measure the extent to which two concepts are related. We build a proof-of-concept tool and evaluate our approach by applying it to requirements from four real-world data sets.",
"title": ""
},
{
"docid": "3ec3285a2babcd3a00b453956dda95aa",
"text": "Microblog normalisation methods often utilise complex models and struggle to differentiate between correctly-spelled unknown words and lexical variants of known words. In this paper, we propose a method for constructing a dictionary of lexical variants of known words that facilitates lexical normalisation via simple string substitution (e.g. tomorrow for tmrw). We use context information to generate possible variant and normalisation pairs and then rank these by string similarity. Highlyranked pairs are selected to populate the dictionary. We show that a dictionary-based approach achieves state-of-the-art performance for both F-score and word error rate on a standard dataset. Compared with other methods, this approach offers a fast, lightweight and easy-to-use solution, and is thus suitable for high-volume microblog pre-processing. 1 Lexical Normalisation A staggering number of short text “microblog” messages are produced every day through social media such as Twitter (Twitter, 2011). The immense volume of real-time, user-generated microblogs that flows through sites has been shown to have utility in applications such as disaster detection (Sakaki et al., 2010), sentiment analysis (Jiang et al., 2011; González-Ibáñez et al., 2011), and event discovery (Weng and Lee, 2011; Benson et al., 2011). However, due to the spontaneous nature of the posts, microblogs are notoriously noisy, containing many non-standard forms — e.g., tmrw “tomorrow” and 2day “today” — which degrade the performance of natural language processing (NLP) tools (Ritter et al., 2010; Han and Baldwin, 2011). To reduce this effect, attempts have been made to adapt NLP tools to microblog data (Gimpel et al., 2011; Foster et al., 2011; Liu et al., 2011b; Ritter et al., 2011). An alternative approach is to pre-normalise non-standard lexical variants to their standard orthography (Liu et al., 2011a; Han and Baldwin, 2011; Xue et al., 2011; Gouws et al., 2011). For example, se u 2morw!!! would be normalised to see you tomorrow! The normalisation approach is especially attractive as a preprocessing step for applications which rely on keyword match or word frequency statistics. For example, earthqu, eathquake, and earthquakeee — all attested in a Twitter corpus — have the standard form earthquake; by normalising these types to their standard form, better coverage can be achieved for keyword-based methods, and better word frequency estimates can be obtained. In this paper, we focus on the task of lexical normalisation of English Twitter messages, in which out-of-vocabulary (OOV) tokens are normalised to their in-vocabulary (IV) standard form, i.e., a standard form that is in a dictionary. Following other recent work on lexical normalisation (Liu et al., 2011a; Han and Baldwin, 2011; Gouws et al., 2011; Liu et al., 2012), we specifically focus on one-to-one normalisation in which one OOV token is normalised to one IV word. Naturally, not all OOV words in microblogs are lexical variants of IV words: named entities, e.g., are prevalent in microblogs, but not all named entities are included in our dictionary. One challenge for lexical normalisation is therefore to dis-",
"title": ""
},
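The dictionary-based pipeline described above reduces normalisation to string substitution once (variant, standard form) pairs have been ranked. The sketch below shows a stripped-down version of that idea; the context-based candidate generation step is simplified here to a fixed candidate list, and the vocabulary, similarity measure, and threshold are assumptions.

```python
# Minimal sketch: pair OOV tokens with in-vocabulary candidates, rank by string
# similarity, keep confident pairs as a (variant -> normal form) dictionary.
from difflib import SequenceMatcher

vocab = ["tomorrow", "today", "earthquake", "see", "you"]
oov_tokens = ["tmrw", "2day", "earthquakeee"]

def sim(a, b):
    return SequenceMatcher(None, a, b).ratio()

norm_dict = {}
for oov in oov_tokens:
    best = max(vocab, key=lambda w: sim(oov, w))
    if sim(oov, best) > 0.4:                 # assumed similarity threshold
        norm_dict[oov] = best

def normalise(text):
    return " ".join(norm_dict.get(tok, tok) for tok in text.split())

print(normalise("see u 2day or tmrw"))       # -> "see u today or tomorrow"
```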
{
"docid": "3f98e2683b83a7312dc4dd6bf1f717aa",
"text": "How do comments on student writing from peers compare to those from subject-matter experts? This study examined the types of comments that reviewers produce as well as their perceived helpfulness. Comments on classmates’ papers were collected from two undergraduate and one graduate-level psychology course. The undergraduate papers in one of the courses were also commented on by an independent psychology instructor experienced in providing feedback to students on similar writing tasks. The comments produced by students at both levels were shorter than the instructor’s. The instructor’s comments were predominantly directive and rarely summative. The undergraduate peers’ comments were more mixed in type; directive and praise comments were the most frequent. Consistently, undergraduate peers found directive and praise comments helpful. The helpfulness of the directive comments was also endorsed by a writing expert.",
"title": ""
},
{
"docid": "698abf5788520934edfbee8f74154825",
"text": "A near-regular texture deviates geometrically and photometrically from a regular congruent tiling. Although near-regular textures are ubiquitous in the man-made and natural world, they present computational challenges for state of the art texture analysis and synthesis algorithms. Using regular tiling as our anchor point, and with user-assisted lattice extraction, we can explicitly model the deformation of a near-regular texture with respect to geometry, lighting and color. We treat a deformation field both as a function that acts on a texture and as a texture that is acted upon, and develop a multi-modal framework where each deformation field is subject to analysis, synthesis and manipulation. Using this formalization, we are able to construct simple parametric models to faithfully synthesize the appearance of a near-regular texture and purposefully control its regularity.",
"title": ""
},
{
"docid": "f64e65df9db7219336eafb20d38bf8cf",
"text": "With predictions that this nursing shortage will be more severe and have a longer duration than has been previously experienced, traditional strategies implemented by employers will have limited success. The aging nursing workforce, low unemployment, and the global nature of this shortage compound the usual factors that contribute to nursing shortages. For sustained change and assurance of an adequate supply of nurses, solutions must be developed in several areas: education, healthcare deliver systems, policy and regulations, and image. This shortage is not solely nursing's issue and requires a collaborative effort among nursing leaders in practice and education, health care executives, government, and the media. This paper poses several ideas of solutions, some already underway in the United States, as a catalyst for readers to initiate local programs.",
"title": ""
},
{
"docid": "10514cb40ed8adc9fb59e12cb0cf3fe9",
"text": "Crossover recombination is a crucial process in plant breeding because it allows plant breeders to create novel allele combnations on chromosomes that can be used for breeding superior F1 hybrids. Gaining control over this process, in terms of increasing crossover incidence, altering crossover positions on chromosomes or silencing crossover formation, is essential for plant breeders to effectively engineer the allelic composition of chromosomes. We review the various means of crossover control that have been described or proposed. By doing so, we sketch a field of science that uses both knowledge from classic literature and the newest discoveries to manage the occurrence of crossovers for a variety of breeding purposes.",
"title": ""
},
{
"docid": "b5b45aa1badbda386b12830c78909693",
"text": "BACKGROUND\nThe healthcare industry has become increasingly dependent on using information technology (IT) to manage its daily operations. Unexpected downtime of health IT systems could therefore wreak havoc and result in catastrophic consequences. Little is known, however, regarding the nature of failures of health IT.\n\n\nOBJECTIVE\nTo analyze historical health IT outage incidents as a means to better understand health IT vulnerabilities and inform more effective prevention and emergency response strategies.\n\n\nMETHODS\nWe studied news articles and incident reports publicly available on the internet describing health IT outage events that occurred in China. The data were qualitatively analyzed using a deductive grounded theory approach based on a synthesized IT risk model developed in the domain of information systems.\n\n\nRESULTS\nA total of 116 distinct health IT incidents were identified. A majority of them (69.8%) occurred in the morning; over 50% caused disruptions to the patient registration and payment collection functions of the affected healthcare facilities. The outpatient practices in tertiary hospitals seem to be particularly vulnerable to IT failures. Software defects and overcapacity issues, followed by malfunctioning hardware, were among the principal causes.\n\n\nCONCLUSIONS\nUnexpected health IT downtime occurs more and more often with the widespread adoption of electronic systems in healthcare. Risk identification and risk assessments are essential steps to developing preventive measures. Equally important is institutionalization of contingency plans as our data show that not all failures of health IT can be predicted and thus effectively prevented. The results of this study also suggest significant future work is needed to systematize the reporting of health IT outage incidents in order to promote transparency and accountability.",
"title": ""
},
{
"docid": "d97b2b028fbfe0658e841954958aac06",
"text": "Videogame control interfaces continue to evolve beyond their traditional roots, with devices encouraging more natural forms of interaction growing in number and pervasiveness. Yet little is known about their true potential for intuitive use. This paper proposes methods to leverage existing intuitive interaction theory for games research, specifically by examining different types of naturally mapped control interfaces for videogames using new measures for previous player experience. Three commercial control devices for a racing game were categorised using an existing typology, according to how the interface maps physical control inputs with the virtual gameplay actions. The devices were then used in a within-groups (n=64) experimental design aimed at measuring differences in intuitive use outcomes. Results from mixed design ANOVA are discussed, along with implications for the field.",
"title": ""
},
{
"docid": "768a8cfff3f127a61f12139466911a94",
"text": "The metabolism of NAD has emerged as a key regulator of cellular and organismal homeostasis. Being a major component of both bioenergetic and signaling pathways, the molecule is ideally suited to regulate metabolism and major cellular events. In humans, NAD is synthesized from vitamin B3 precursors, most prominently from nicotinamide, which is the degradation product of all NAD-dependent signaling reactions. The scope of NAD-mediated regulatory processes is wide including enzyme regulation, control of gene expression and health span, DNA repair, cell cycle regulation and calcium signaling. In these processes, nicotinamide is cleaved from NAD(+) and the remaining ADP-ribosyl moiety used to modify proteins (deacetylation by sirtuins or ADP-ribosylation) or to generate calcium-mobilizing agents such as cyclic ADP-ribose. This review will also emphasize the role of the intermediates in the NAD metabolome, their intra- and extra-cellular conversions and potential contributions to subcellular compartmentalization of NAD pools.",
"title": ""
},
{
"docid": "630baadcd861e58e0f36ba3a4e52ffd2",
"text": "The handshake gesture is an important part of the social etiquette in many cultures. It lies at the core of many human interactions, either in formal or informal settings: exchanging greetings, offering congratulations, and finalizing a deal are all activities that typically either start or finish with a handshake. The automated detection of a handshake can enable wide range of pervasive computing scanarios; in particular, different types of information can be exchanged and processed among the handshaking persons, depending on the physical/logical contexts where they are located and on their mutual acquaintance. This paper proposes a novel handshake detection system based on body sensor networks consisting of a resource-constrained wrist-wearable sensor node and a more capable base station. The system uses an effective collaboration technique among body sensor networks of the handshaking persons which minimizes errors associated with the application of classification algorithms and improves the overall accuracy in terms of the number of false positives and false negatives.",
"title": ""
},
{
"docid": "7c85a62d9fd756f729b01024256d9728",
"text": "WiFi are easily available almost everywhere nowadays. Due to this, there is increasing interest in harnessing this technology for purposes other than communication. Therefore, this research was carried out with the main idea of using WiFi in developing an efficient, low cost control system for small office home office (SOHO) indoor environment. The main objective of the research is to develop a proof of concept that WiFi received signal strength indicator (RSSI) can be harnessed and used to develop a control system. The control system basically will help to save energy in an intelligent manner with a very minimum cost for the controller circuit. There are two main parts in the development of the system. First is extracting the RSSI monitoring feed information and analyzing it for designing the control system. The second is the development of the controller circuit for real environment. The simple yet inexpensive controller was tested in an indoor environment and results showed successful operation of the circuit developed.",
"title": ""
},
{
"docid": "7ea22f7a29d045e24414f74b2b2f5f72",
"text": "Belief Spans • A text span to record Informable Slots and Requestable Slots to enable a RNN to decode it. For example: <Inf>Chinese; Expensive<\\Inf> <Req>Address; Phone<\\Req> • Roles: knowledge base search and response conditioning. Sequicity Formalization • Source-target sequence pair: {B0R0U1, B1R1}, {B1R1U2, B2R2}, ..., {Bt-1Rt-1Ut, BtRt}. • Two-stage decoding: – Bt=Seq2seq(Bt-1Rt-1Ut) – Rt=Seq2seq(Bt-1Rt-1Ut|Bt, KB search results)",
"title": ""
},
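The formalization above reduces dialogue state tracking to decoding a text span. The sketch below shows only how the source-target training sequences can be assembled as plain strings; the delimiter tokens and example dialogue turn are illustrative, and the neural seq2seq decoder itself is omitted.

```python
# Minimal sketch: assembling belief-span training pairs as text sequences.
def belief_span(informable, requestable):
    return ("<Inf>" + ";".join(informable) + "</Inf>"
            "<Req>" + ";".join(requestable) + "</Req>")

def make_training_pair(prev_bspan, prev_response, user_turn, bspan, response):
    source = " ".join([prev_bspan, prev_response, user_turn])
    target = " ".join([bspan, response])   # stage 1 decodes the bspan, stage 2 the response
    return source, target

b0 = belief_span([], [])
b1 = belief_span(["Chinese", "Expensive"], ["Address", "Phone"])
src, tgt = make_training_pair(b0, "",
                              "I want an expensive Chinese restaurant, what's the address?",
                              b1, "The Golden Wok is at 12 High Street.")
print(src)
print(tgt)
```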
{
"docid": "a0850b5f8b2d994b50bb912d6fca3dfb",
"text": "In this paper we describe the development of an accurate, smallfootprint, large vocabulary speech recognizer for mobile devices. To achieve the best recognition accuracy, state-of-the-art deep neural networks (DNNs) are adopted as acoustic models. A variety of speedup techniques for DNN score computation are used to enable real-time operation on mobile devices. To reduce the memory and disk usage, on-the-fly language model (LM) rescoring is performed with a compressed n-gram LM. We were able to build an accurate and compact system that runs well below real-time on a Nexus 4 Android phone.",
"title": ""
},
{
"docid": "e4a1f577cb232f6f76fba149a69db58f",
"text": "During software development, the activities of requirements analysis, functional specification, and architectural design all require a team of developers to converge on a common vision of what they are developing. There have been remarkably few studies of conceptual design during real projects. In this paper, we describe a detailed field study of a large industrial software project. We observed the development team's conceptual design activities for three months with follow-up observations and discussions over the following eight months. In this paper, we emphasize the organization of the project and how patterns of collaboration affected the team's convergence on a common vision. Three observations stand out: First, convergence on a common vision was not only painfully slow but was punctuated by several reorientations of direction; second, the design process seemed to be inherently forgetful, involving repeated resurfacing of previously discussed issues; finally, a conflict of values persisted between team members responsible for system development and those responsible for overseeing the development process. These findings have clear implications for collaborative support tools and process interventions.",
"title": ""
},
{
"docid": "0eea36947d6cfcf1e064f84c89b0e68c",
"text": "Recently, large-scale knowledge bases have been constructed by automatically extracting relational facts from text. Unfortunately, most of the current knowledge bases focus on static facts and ignore the temporal dimension. However, the vast majority of facts are evolving with time or are valid only during a particular time period. Thus, time is a significant dimension that should be included in knowledge bases.\n In this paper, we introduce a complete information extraction framework that harvests temporal facts and events from semi-structured data and free text of Wikipedia articles to create a temporal ontology. First, we extend a temporal data representation model by making it aware of events. Second, we develop an information extraction method which harvests temporal facts and events from Wikipedia infoboxes, categories, lists, and article titles in order to build a temporal knowledge base. Third, we show how the system can use its extracted knowledge for further growing the knowledge base.\n We demonstrate the effectiveness of our proposed methods through several experiments. We extracted more than one million temporal facts with precision over 90% for extraction from semi-structured data and almost 70% for extraction from text.",
"title": ""
},
{
"docid": "671bcd8c52fd6ad3cb2806ffa0cedfda",
"text": "In this paper we present a class of soft-robotic systems with superior load bearing capacity and expanded degrees of freedom. Spatial parallel soft robotic systems utilize spatial arrangement of soft actuators in a manner similar to parallel kinematic machines. In this paper we demonstrate that such an arrangement of soft actuators enhances stiffness and yield dramatic motions. The current work utilizes tri-chamber actuators made from silicone rubber to demonstrate the viability of the concept.",
"title": ""
}
] |
scidocsrr
|
3d4318fec634cdeb89fd92b0ec8e1cbf
|
Efficient distance-based outlier detection on uncertain datasets of Gaussian distribution
|
[
{
"docid": "19d4662287a5c3ce1cef85fa601b74ba",
"text": "This paper compares two approaches in identifying outliers in multivariate datasets; Mahalanobis distance (MD) and robust distance (RD). MD has been known suffering from masking and swamping effects and RD is an approach that was developed to overcome problems that arise in MD. There are two purposes of this paper, first is to identify outliers using MD and RD and the second is to show that RD performs better than MD in identifying outliers. An observation is classified as an outlier if MD or RD is larger than a cut-off value. Outlier generating model is used to generate a set of data and MD and RD are computed from this set of data. The results showed that RD can identify outliers better than MD. However, in non-outliers data the performance for both approaches are similar. The results for RD also showed that RD can identify multivariate outliers much better when the number of dimension is large.",
"title": ""
},
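The classical MD approach compared above flags an observation when its squared Mahalanobis distance from the sample mean exceeds a chi-square cutoff. The sketch below shows that baseline on synthetic data; the robust variant (RD) would replace the mean and covariance with robust estimates such as MCD, which is omitted here, and the 0.975 quantile is a commonly used but assumed cutoff.

```python
# Minimal Mahalanobis-distance outlier detection (classical MD, not RD).
import numpy as np
from scipy.stats import chi2   # requires SciPy for the chi-square quantile

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=200)
X[:3] += 8                                     # plant a few outliers

mean = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mean
md2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared Mahalanobis distances

cutoff = chi2.ppf(0.975, df=X.shape[1])        # cutoff for p variables
print(np.where(md2 > cutoff)[0])               # indices flagged as outliers
```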
{
"docid": "f5168565306f6e7f2b36ef797a6c9de8",
"text": "We study the problem of clustering data objects whose locations are uncertain. A data object is represented by an uncertainty region over which a probability density function (pdf) is defined. One method to cluster uncertain objects of this sort is to apply the UK-means algorithm, which is based on the traditional K-means algorithm. In UK-means, an object is assigned to the cluster whose representative has the smallest expected distance to the object. For arbitrary pdf, calculating the expected distance between an object and a cluster representative requires expensive integration computation. We study various pruning methods to avoid such expensive expected distance calculation.",
"title": ""
}
] |
[
{
"docid": "3023637fd498bb183dae72135812c304",
"text": "computational method for its solution. A Psychological Description of LSA as a Theory of Learning, Memory, and Knowledge We give a more complete description of LSA as a mathematical model later when we use it to simulate lexical acquisition. However, an overall outline is necessary to understand a roughly equivalent psychological theory we wish to present first. The input to LSA is a matrix consisting of rows representing unitary event types by columns representing contexts in which instances of the event types appear. One example is a matrix of unique word types by many individual paragraphs in which the words are encountered, where a cell contains the number of times that a particular word type, say model, appears in a particular paragraph, say this one. After an initial transformation of the cell entries, this matrix is analyzed by a statistical technique called singular value decomposition (SVD) closely akin to factor analysis, which allows event types and individual contexts to be re-represented as points or vectors in a high dimensional abstract space (Golub, Lnk, & Overton, 1981 ). The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or con-space (Golub, Lnk, & Overton, 1981 ). The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or contexts (e.g., word-word, word-paragraph, or paragraph-paragraph similarities). Psychologically, the data that the model starts with are raw, first-order co-occurrence relations between stimuli and the local contexts or episodes in which they occur. The stimuli or event types may be thought of as unitary chunks of perception or memory. The first-order process by which initial pairwise associations are entered and transformed in LSA resembles classical conditioning in that it depends on contiguity or co-occurrence, but weights the result first nonlinearly with local occurrence frequency, then inversely with a function of the number of different contexts in which the particular component is encountered overall and the extent to which its occurrences are spread evenly over contexts. However, there are possibly important differences in the details as currently implemented; in particular, LSA associations are symmetrical; a context is associated with the individual events it contains by the same cell entry as the events are associated with the context. This would not be a necessary feature of the model; it would be possible to make the initial matrix asymmetrical, with a cell indicating the co-occurrence relation, for example, between a word and closely following words. Indeed, Lund and Burgess (in press; Lund, Burgess, & Atchley, 1995), and SchUtze (1992a, 1992b), have explored related models in which such data are the input. The first step of the LSA analysis is to transform each cell entry from the number of times that a word appeared in a particular context to the log of that frequency. This approximates the standard empirical growth functions of simple learning. The fact that this compressive function begins anew with each context also yields a kind of spacing effect; the association of A and B is greater if both appear in two different contexts than if they each appear twice in one context. In a second transformation, all cell entries for a given word are divided by the entropy for that word, Z p log p over all its contexts. 
Roughly speaking, this step accomplishes much the same thing as conditioning rules such as those described by Rescorla & Wagner (1972), in that it makes the primary association better represent the informative relation between the entities rather than the mere fact that they occurred together. Somewhat more formally, the inverse entropy measure estimates the degree to which observing the occurrence of a component specifies what context it is in; the larger the entropy of, say, a word, the less information its observation transmits about the places it has occurred, so the less usage-defined meaning it acquires, and conversely, the less the meaning of a particular context is determined by containing the word. It is interesting to note that automatic information retrieval methods (including LSA when used for the purpose) are greatly improved by transformations of this general form, the present one usually appearing to be the best (Harman, 1986). It does not seem far-fetched to believe that the necessary transform for good information retrieval, retrieval that brings back text corresponding to what a person has in mind when the person offers one or more query words, corresponds to the functional relations in basic associative processes. Anderson (1990) has drawn attention to the analogy between information retrieval in external systems and those in the human mind. It is not clear which way the relationship goes. Does information retrieval in automatic systems work best when it mimics the circumstances that make people think two things are related, or is there a general logic that tends to make them have similar forms? In automatic information retrieval the logic is usually assumed to be that idealized searchers have in mind exactly the same text as they would like the system to find and draw the words in their queries from that text (see Bookstein & Swanson, 1974). [Footnote: Although this exploratory process takes some advantage of chance, there is no reason why any number of dimensions should be much better than any other unless some mechanism like the one proposed is at work. In all cases, the model's remaining parameters were fitted only to its input (training) data and not to the criterion (generalization) test.] Then the system's challenge is to estimate the probability that each text in its store is the one that the searcher was thinking about. This characterization, then, comes full circle to the kind of communicative agreement model we outlined above: The sender issues a word chosen to express a meaning he or she has in mind, and the receiver tries to estimate the probability of each of the sender's possible messages. Gallistel (1990) has argued persuasively for the need to separate local conditioning or associative processes from global representation of knowledge. The LSA model expresses such a separation in a very clear and precise way. The initial matrix after transformation to log frequency divided by entropy represents the product of the local or pairwise processes. The subsequent analysis and dimensionality reduction takes all of the previously acquired local information and turns it into a unified representation of knowledge. Thus, the first processing step of the model, modulo its associational symmetry, is a rough approximation to conditioning or associative processes. 
However, the model's next steps, the singular value decomposition and dimensionality optimization, are not contained as such in any extant psychological theory of learning, although something of the kind may be hinted at in some modern discussions of conditioning and, on a smaller scale and differently interpreted, is often implicit and sometimes explicit in many neural net and spreading-activation architectures. This step converts the transformed associative data into a condensed representation. The condensed representation can be seen as achieving several things, although they are at heart the result of only one mechanism. First, the re-representation captures indirect, higher-order associations. That is, if a particular stimulus, X (e.g., a word), has been associated with some other stimulus, Y, by being frequently found in joint context (i.e., contiguity), and Y is associated with Z, then the condensation can cause X and Z to have similar representations. However, the strength of the indirect XZ association depends on much more than a combination of the strengths of XY and YZ. This is because the relation between X and Z also depends, in a well-specified manner, on the relation of each of the stimuli, X, Y, and Z, to every other entity in the space. In the past, attempts to predict indirect associations by stepwise chaining rules have not been notably successful (see, e.g., Pollio, 1968; Young, 1968). If associations correspond to distances in space, as supposed by LSA, stepwise chaining rules would not be expected to work well; if X is two units from Y and Y is two units from Z, all we know about the distance from X to Z is that it must be between zero and four. But with data about the distances between X, Y, Z, and other points, the estimate of XZ may be greatly improved by also knowing XY and YZ. An alternative view of LSA's effects is the one given earlier, the induction of a latent higher order similarity structure (thus its name) among representations of a large collection of events. Imagine, for example, that every time a stimulus (e.g., a word) is encountered, the distance between its representation and that of every other stimulus that occurs in close proximity to it is adjusted to be slightly smaller. The adjustment is then allowed to percolate through the whole previously constructed structure of relations, each point pulling on its neighbors until all settle into a compromise configuration (physical objects, weather systems, and Hopfield nets do this too; Hopfield, 1982). It is easy to see that the resulting relation between any two representations depends not only on direct experience with them but with everything else ever experienced. Although the current mathematical implementation of LSA does not work in this incremental way, its effects are much the same. The question, then, is whether such a mechanism, when combined with the statistics of experience, produces a faithful reflection of human knowledge. Finally, to anticipate what is developed later, the computational scheme used by LSA for combining and condensing local information into a common",
"title": ""
},
{
"docid": "3acc77360d13c47d16dadc886a34f51e",
"text": "Background: Because of the high-speed development of e-commerce, online group buying has become a new popular pattern of consumption for Chinese consumers. Previous research has studied online group-buying (OGB) purchase intention in some specific areas such as Taiwan, but in mainland China. Purpose: The purpose of this study is to contribute to the Technology Acceptance Model, incorporating other potential driving factors to address how they influence Chinese consumers' online group-buying purchase intentions. Method: The study uses two steps to achieve its purpose. The first step is that I use the focus group interview technique to collect primary data. The results combining the Technology Acceptance model help me propose hypotheses. The second step is that the questionnaire method is applied for empirical data collection. The constructs are validated with exploratory factor analysis and reliability analysis, and then the model is tested with Linear multiple regression. Findings: The results have shown that the adapted research model has been successfully tested in this study. The seven factors (perceived usefulness, perceived ease of use, price, e-trust, Word of Mouth, website quality and perceived risk) have significant effects on Chinese consumers' online group-buying purchase intentions. This study suggests that managers of group-buying websites need to design easy-to-use platform for users. Moreover, group-buying website companies need to propose some rules or regulations to protect consumers' rights. When conflicts occur, evendors can follow these rules to provide solutions that are reasonable and satisfying for consumers.",
"title": ""
},
{
"docid": "92cafadc922255249108ce4a0dad9b98",
"text": "Generative Adversarial Networks (GAN) have attracted much research attention recently, leading to impressive results for natural image generation. However, to date little success was observed in using GAN generated images for improving classification tasks. Here we attempt to explore, in the context of car license plate recognition, whether it is possible to generate synthetic training data using GAN to improve recognition accuracy. With a carefully-designed pipeline, we show that the answer is affirmative. First, a large-scale image set is generated using the generator of GAN, without manual annotation. Then, these images are fed to a deep convolutional neural network (DCNN) followed by a bidirectional recurrent neural network (BRNN) with long short-term memory (LSTM), which performs the feature learning and sequence labelling. Finally, the pre-trained model is fine-tuned on real images. Our experimental results on a few data sets demonstrate the effectiveness of using GAN images: an improvement of 7.5% over a strong baseline with moderate-sized real data being available. We show that the proposed framework achieves competitive recognition accuracy on challenging test datasets. We also leverage the depthwise separate convolution to construct a lightweight convolutional RNN, which is about half size and 2× faster on CPU. Combining this framework and the proposed pipeline, we make progress in performing accurate recognition on mobile and embedded devices.",
"title": ""
},
{
"docid": "fe4726395624786c5a6327c5b9eba7b3",
"text": "BACKGROUND\nTopical oestrogen and manual separation are the main treatments for labial adhesions. The aim was to evaluate treatment of labial adhesions and compare the outcome of topical oestrogen treatment with that of manual separation.\n\n\nMETHOD\nAll girls aged 0-12 years admitted to a tertiary centre for paediatric surgery for labial adhesions were included. The study design was dual: The first part was a retrospective chart review of the treatment success according to the medical charts. The second part was a cross-sectional parent-reported long-term outcome study (> 6 months after last treatment finished).\n\n\nRESULTS\nIn total 71 patients were included and the median follow-up time for the chart study was 84 (6-162) months after treatment with oestrogen or manual separation. Oestrogen was the first treatment for 66 patients who had an initial successful rate of 62% but this was followed by recurrences in 44%. Five patients had manual treatment as their first treatment and they had a 100% initial success rate followed by recurrences in 20%. Therefore, for the first treatment course there was a final success rate of 35% for oestrogen and 80% for manual separation (p = 0.006). Corresponding final success rates including all consecutive treatments over the study period were 46/130 (35%) for oestrogen and 21/30 (70%) for manual separation (p = 0.001). The success rate for oestrogen did not differ if treatment was given in a course length of 0-4 weeks (39% success) or > 4 weeks (32% success) (p = 0.369). In the parent-reported long-term outcome study the response rate was 51% (36/71). Parents reported that recurrences of adhesions after last prescribed/performed treatment were frequent: in total 25% of patients still had adhesions corresponding to 8/29 (29%) of those whose last treatment was oestrogen and 1/9 (11%) of those whose last treatment was manual separation.\n\n\nCONCLUSION\nDue to the results recurrences are common after both oestrogen and manual separations. However, the overall final outcome after manual separation seems to be more successful when compared to that of topical oestrogen treatment.",
"title": ""
},
{
"docid": "677f5e0ca482bf7ea7bf929ae3adbf76",
"text": "Multilevel modulation formats, such as PAM-4, have been introduced in recent years for next generation wireline communication systems for more efficient use of the available link bandwidth. High-speed ADCs with digital signal processing (DSP) can provide robust performance for such systems to compensate for the severe channel impairment as the data rate continues to increase.",
"title": ""
},
{
"docid": "631a5765b1685f8884e82d3a1d0d6341",
"text": "The recent deployment of very large-scale camera networks has led to a unique version of the tracking problem whose goal is to detect and track every vehicle within a large urban area. To address this problem we exploit constraints inherent in urban environments (i.e. while there are often many vehicles, they follow relatively consistent paths) to create novel visual processing tools that are highly efficient in detecting cars in a fixed scene and at connecting these detections into partial tracks.We derive extensions to a network flow based probabilistic data association model to connect these tracks between cameras. Our real time system is evaluated on a large set of ground-truthed traffic videos collected by a network of seven cameras in a dense urban scene.",
"title": ""
},
{
"docid": "49c7b5cab51301d8b921fa87d6c0b1ff",
"text": "We introduce the input output automa ton a simple but powerful model of computation in asynchronous distributed networks With this model we are able to construct modular hierarchical correct ness proofs for distributed algorithms We de ne this model and give an interesting example of how it can be used to construct such proofs",
"title": ""
},
{
"docid": "5b6bf9ee0fed37b20d4b3607717d2f77",
"text": "In order to understand the organization of the cerebral cortex, it is necessary to create a map or parcellation of cortical areas. Reconstructions of the cortical surface created from structural MRI scans, are frequently used in neuroimaging as a common coordinate space for representing multimodal neuroimaging data. These meshes are used to investigate healthy brain organization as well as abnormalities in neurological and psychiatric conditions. We frame cerebral cortex parcellation as a mesh segmentation task, and address it by taking advantage of recent advances in generalizing convolutions to the graph domain. In particular, we propose to assess graph convolutional networks and graph attention networks, which, in contrast to previous mesh parcellation models, exploit the underlying structure of the data to make predictions. We show experimentally on the Human Connectome Project dataset that the proposed graph convolutional models outperform current state-ofthe-art and baselines, highlighting the potential and applicability of these methods to tackle neuroimaging challenges, paving the road towards a better characterization of brain diseases.",
"title": ""
},
{
"docid": "80cccd3f325c8bd9e91854a82f39bbbe",
"text": "In this paper new fast algorithms for erosion, dilation, propagation and skeletonization are presented. The key principle of the algorithms is to process object contours. A queue is implemented to store the contours in each iteration for the next iteration. The contours can be passed from one operation to another as well. Contour filling and object labelling become available by minor modifications of the basic operations. The time complexity of the algorithms is linear with the number of contour elements to be processed. The algorithms prove to be faster than any other known algorithms..",
"title": ""
},
{
"docid": "a4099a526548c6d00a91ea21b9f2291d",
"text": "The robust principal component analysis (robust PCA) problem has been considered in many machine learning applications, where the goal is to decompose the data matrix to a low rank part plus a sparse residual. While current approaches are developed by only considering the low rank plus sparse structure, in many applications, side information of row and/or column entities may also be given, and it is still unclear to what extent could such information help robust PCA. Thus, in this paper, we study the problem of robust PCA with side information, where both prior structure and features of entities are exploited for recovery. We propose a convex problem to incorporate side information in robust PCA and show that the low rank matrix can be exactly recovered via the proposed method under certain conditions. In particular, our guarantee suggests that a substantial amount of low rank matrices, which cannot be recovered by standard robust PCA, become recoverable by our proposed method. The result theoretically justifies the effectiveness of features in robust PCA. In addition, we conduct synthetic experiments as well as a real application on noisy image classification to show that our method also improves the performance in practice by exploiting side information.",
"title": ""
},
{
"docid": "40bc405aaec0fd8563de84e163091325",
"text": "The extremely tight binding between biotin and avidin or streptavidin makes labeling proteins with biotin a useful tool for many applications. BirA is the Escherichia coli biotin ligase that site-specifically biotinylates a lysine side chain within a 15-amino acid acceptor peptide (also known as Avi-tag). As a complementary approach to in vivo biotinylation of Avi-tag-bearing proteins, we developed a protocol for producing recombinant BirA ligase for in vitro biotinylation. The target protein was expressed as both thioredoxin and MBP fusions, and was released from the corresponding fusion by TEV protease. The liberated ligase was separated from its carrier using HisTrap HP column. We obtained 24.7 and 27.6 mg BirA ligase per liter of culture from thioredoxin and MBP fusion constructs, respectively. The recombinant enzyme was shown to be highly active in catalyzing in vitro biotinylation. The described protocol provides an effective means for making BirA ligase that can be used for biotinylation of different Avi-tag-bearing substrates.",
"title": ""
},
{
"docid": "586ba74140fb7f68cc7c5b0990fb7352",
"text": "Hotel companies are struggling to keep up with the rapid consumer adoption of social media. Although many companies have begun to develop social media programs, the industry has yet to fully explore the potential of this emerging data and communication resource. The revenue management department, as it evolves from tactical inventory management to a more expansive role across the organization, is poised to be an early adopter of the opportunities afforded by social media. We propose a framework for evaluating social media-related revenue management opportunities, discuss the issues associated with leveraging these opportunities and propose a roadmap for future research in this area. Journal of Revenue and Pricing Management (2011) 10, 293–305. doi:10.1057/rpm.2011.12; published online 6 May 2011",
"title": ""
},
{
"docid": "38192c65a2b9819b9e2ccba15f7d6706",
"text": "Many applications pointed to the informative potential of the human eyes. In this paper we investigate the possibility of estimating the cognitive process used by a person when addressing a mental challenge, according to the Eye Accessing Cue (EAC) model from the Neuro-Linguistic Programming (NLP) theory [3]. This model states that there is a subtle, yet firm, connection between the non-visual gaze direction and the mental representation system used. From the point of view of computer vision, this work deals with gaze estimation under passive illumination. Using a multistage fusion approach, we show that it is possible to achieve highly accurate results in both terms of eye gaze localization or EAC case recognition.",
"title": ""
},
{
"docid": "935c404529b02cee2620e52f7a09b84d",
"text": "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: 1) learning the inverse kinematics in a highly-redundant robotic arm, 2) learning omnidirectional locomotion with motor primitives in a quadruped robot, 3) an arm learning to control a fishing rod with a flexible wire. We show that 1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; 2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; 3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.",
"title": ""
},
{
"docid": "f70447a47fb31fc94d6b57ca3ef57ad3",
"text": "BACKGROUND\nOn Aug 14, 2014, the US Food and Drug Administration approved the antiangiogenesis drug bevacizumab for women with advanced cervical cancer on the basis of improved overall survival (OS) after the second interim analysis (in 2012) of 271 deaths in the Gynecologic Oncology Group (GOG) 240 trial. In this study, we report the prespecified final analysis of the primary objectives, OS and adverse events.\n\n\nMETHODS\nIn this randomised, controlled, open-label, phase 3 trial, we recruited patients with metastatic, persistent, or recurrent cervical carcinoma from 81 centres in the USA, Canada, and Spain. Inclusion criteria included a GOG performance status score of 0 or 1; adequate renal, hepatic, and bone marrow function; adequately anticoagulated thromboembolism; a urine protein to creatinine ratio of less than 1; and measurable disease. Patients who had received chemotherapy for recurrence and those with non-healing wounds or active bleeding conditions were ineligible. We randomly allocated patients 1:1:1:1 (blocking used; block size of four) to intravenous chemotherapy of either cisplatin (50 mg/m2 on day 1 or 2) plus paclitaxel (135 mg/m2 or 175 mg/m2 on day 1) or topotecan (0·75 mg/m2 on days 1-3) plus paclitaxel (175 mg/m2 on day 1) with or without intravenous bevacizumab (15 mg/kg on day 1) in 21 day cycles until disease progression, unacceptable toxic effects, voluntary withdrawal by the patient, or complete response. We stratified randomisation by GOG performance status (0 vs 1), previous radiosensitising platinum-based chemotherapy, and disease status (recurrent or persistent vs metastatic). We gave treatment open label. Primary outcomes were OS (analysed in the intention-to-treat population) and adverse events (analysed in all patients who received treatment and submitted adverse event information), assessed at the second interim and final analysis by the masked Data and Safety Monitoring Board. The cutoff for final analysis was 450 patients with 346 deaths. This trial is registered with ClinicalTrials.gov, number NCT00803062.\n\n\nFINDINGS\nBetween April 6, 2009, and Jan 3, 2012, we enrolled 452 patients (225 [50%] in the two chemotherapy-alone groups and 227 [50%] in the two chemotherapy plus bevacizumab groups). By March 7, 2014, 348 deaths had occurred, meeting the prespecified cutoff for final analysis. The chemotherapy plus bevacizumab groups continued to show significant improvement in OS compared with the chemotherapy-alone groups: 16·8 months in the chemotherapy plus bevacizumab groups versus 13·3 months in the chemotherapy-alone groups (hazard ratio 0·77 [95% CI 0·62-0·95]; p=0·007). Final OS among patients not receiving previous pelvic radiotherapy was 24·5 months versus 16·8 months (0·64 [0·37-1·10]; p=0·11). Postprogression OS was not significantly different between the chemotherapy plus bevacizumab groups (8·4 months) and chemotherapy-alone groups (7·1 months; 0·83 [0·66-1·05]; p=0·06). Fistula (any grade) occurred in 32 (15%) of 220 patients in the chemotherapy plus bevacizumab groups (all previously irradiated) versus three (1%) of 220 in the chemotherapy-alone groups (all previously irradiated). Grade 3 fistula developed in 13 (6%) versus one (<1%). No fistulas resulted in surgical emergencies, sepsis, or death.\n\n\nINTERPRETATION\nThe benefit conferred by incorporation of bevacizumab is sustained with extended follow-up as evidenced by the overall survival curves remaining separated. 
After progression while receiving bevacizumab, we did not observe a negative rebound effect (ie, shorter survival after bevacizumab is stopped than after chemotherapy alone is stopped). These findings represent proof-of-concept of the efficacy and tolerability of antiangiogenesis therapy in advanced cervical cancer.\n\n\nFUNDING\nNational Cancer Institute.",
"title": ""
},
{
"docid": "fac9465df30dd5d9ba5bc415b2be8172",
"text": "In the Railway System, Railway Signalling System is the vital control equipment responsible for the safe operation of trains. In Railways, the system of communication from railway stations and running trains is by the means of signals through wired medium. Once the train leaves station, there is no communication between the running train and the station or controller. Hence, in case of failures or in emergencies in between stations, immediate information cannot be given and a particular problem will escalate with valuable time lost. Because of this problem only a single train can run in between two nearest stations. Now a days, Railway all over the world is using Optical Fiber cable for communication between stations and to send signals to trains. The usage of optical fibre cables does not lend itself for providing trackside communication as in the case of copper cable. Hence, another transmission medium is necessary for communication outside the station limits with drivers, guards, maintenance gangs, gateman etc. Obviously the medium of choice for such communication is wireless. With increasing speed and train density, adoption of train control methods such as Automatic warning system, (AWS) or, Automatic train stop (ATS), or Positive train separation (PTS) is a must. Even though, these methods traditionally pick up their signals from track based beacons, Wireless Sensor Network based systems will suit the Railways much more. In this paper, we described a new and innovative medium for railways that is Wireless Sensor Network (WSN) based Railway Signalling System and conclude that Introduction of WSN in Railways will not only achieve economy but will also improve the level of safety and efficiency of train operations.",
"title": ""
},
{
"docid": "39b2c607c29c21d86b8d250886725ab3",
"text": "Central auditory processing disorder (CAPD) may be viewed as a multidimensional entity with far-reaching communicative, educational, and psychosocial implications for which differential diagnosis not only is possible but also is essential to an understanding of its impact and to the development of efficacious, deficit-specific management plans. This paper begins with a description of some behavioral central auditory assessment tools in current clinical use. Four case studies illustrate the utility of these tools in clarifying the nature of auditory difficulties. Appropriate treatment options that flow logically from the diagnoses are given in each case. The heterogeneity of the population presenting with auditory processing problems, not unexpected based on this model, is made clear, as is the clinical utility of central auditory tests in the transdisciplinary assessment and management of children's language and learning difficulties.",
"title": ""
},
{
"docid": "0b2c5629cdf3e8de592cfe600de92360",
"text": "Correlation is a robust and general technique for pattern recognition and is used in many applications, such as automatic target recognition, biometric recognition and optical character recognition. The design, analysis, and use of correlation pattern recognition algorithms require background information, including linear systems theory, random variables and processes, matrix/vector methods, detection and estimation theory, digital signal processing, and optical processing. This book provides a needed review of this diverse background material and develops the signal processing theory, the pattern recognition metrics, and the practical application know-how from basic premises. It shows both digital and optical implementations. It also contains state-of-the-art technology presented by the team that developed it and includes case studies of significant current interest, such as face and target recognition. It is suitable for advanced undergraduate or graduate students taking courses in pattern recognition theory, whilst reaching technical levels of interest to the professional practitioner.",
"title": ""
},
{
"docid": "d0d3ea7c5497070ca2a7e9f904a3c515",
"text": "Fairness in algorithmic decision-making processes is attracting increasing concern. When an algorithm is applied to human-related decisionmaking an estimator solely optimizing its predictive power can learn biases on the existing data, which motivates us the notion of fairness in machine learning. while several different notions are studied in the literature, little studies are done on how these notions affect the individuals. We demonstrate such a comparison between several policies induced by well-known fairness criteria, including the color-blind (CB), the demographic parity (DP), and the equalized odds (EO). We show that the EO is the only criterion among them that removes group-level disparity. Empirical studies on the social welfare and disparity of these policies are conducted.",
"title": ""
},
{
"docid": "b1272039194d07ff9b7568b7f295fbfb",
"text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.",
"title": ""
}
] |
scidocsrr
|
ac38ece6e34b61f356030a88522696a7
|
Deep auto-encoder neural networks in reinforcement learning
|
[
{
"docid": "6777525c8b57cc14f38fa1d528b30dce",
"text": "Batch reinforcement learning methods provide a powerful framework for learning efficiently and effectively in autonomous robots. The paper reviews some recent work of the authors aiming at the successful application of reinforcement learning in a challenging and complex domain. It discusses several variants of the general batch learning framework, particularly tailored to the use of multilayer perceptrons to approximate value functions over continuous state spaces. The batch learning framework is successfully used to learn crucial skills in our soccer-playing robots participating in the RoboCup competitions. This is demonstrated on three different case studies.",
"title": ""
},
{
"docid": "274a88ca3f662b6250d856148389b078",
"text": "This paper introduces NFQ, an algorithm for efficient and effective training of a Q-value function represented by a multi-layer perceptron. Based on the principle of storing and reusing transition experiences, a model-free, neural network based Reinforcement Learning algorithm is proposed. The method is evaluated on three benchmark problems. It is shown empirically, that reasonably few interactions with the plant are needed to generate control policies of high quality.",
"title": ""
}
] |
[
{
"docid": "8cf8727c31a8bc888a23b82eee1d7dfc",
"text": "Low stiffness elements have a number of applications in Soft Robotics, from Series Elastic Actuators (SEA) to torque sensors for compliant systems.",
"title": ""
},
{
"docid": "aa80419c97d4461d602528def066f26b",
"text": "Rheumatoid arthritis (RA) is a chronic inflammatory disease characterized by synovial inflammation that can lead to structural damage of cartilage, bone and tendons. Assessing the inflammatory activity and the severity is essential in RA to help rheumatologists in adopting proper therapeutic strategies and in evaluating disease outcome and response to treatment. In the last years musculoskeletal (MS) ultrasonography (US) underwent tremendous technological development of equipment with increased sensitivity in detecting a wide set of joint and soft tissues abnormalities. In RA MSUS with the use of Doppler modalities is a useful imaging tool to depict inflammatory abnormalities (i.e. synovitis, tenosynovitis and bursitis) and structural changes (i.e. bone erosions, cartilage damage and tendon lesions). In addition, MSUS has been demonstrated to be able to monitor the response to different therapies in RA to guide local diagnostic and therapeutic procedures such as biopsy, fluid aspirations and injections. Future applications based on the development of new tools may improve the role of MSUS in RA.",
"title": ""
},
{
"docid": "faf51b31266d44d1ee023bbfb5d0ca81",
"text": "Prepubescent boys are, if anything, more likely than girls to be depressed. During adolescence, however, a dramatic shift occurs: between the ages of 11 and 13 years, this trend in depression rates is reversed. By 15 years of age, females are approximately twice as likely as males to have experienced an episode of depression, and this gender gap persists for the next 35 to 40 years. We offer a theoretical framework that addresses the timing of this phenomenon. First, we discuss the social and hormonal mechanisms that stimulate affiliative needs for females at puberty. Next, we describe how heightened affiliative need can interact with adolescent transition difficulties to create a depressogenic diathesis as at-risk females reach puberty. This gender-linked vulnerability explains why adolescent females are more likely than males to become depressed when faced with negative life events and, particularly, life events with interpersonal consequences.",
"title": ""
},
{
"docid": "b042f6478ef34f4be8ee9b806ddf6011",
"text": "By using an extensive framework for e-learning enablers and disablers (including 37 factors) this paper sets out to identify which of these challenges are most salient for an e-learning course in Sri Lanka. The study includes 1887 informants and data has been collected from year 2004 to 2007, covering opinions of students and staff. A quantitative approach is taken to identify the most important factors followed by a qualitative analysis to explain why and how they are important. The study identified seven major challenges in the following areas: Student support, Flexibility, Teaching and Learning Activities, Access, Academic confidence, Localization and Attitudes. In this paper these challenges will be discussed and solutions suggested.",
"title": ""
},
{
"docid": "8f21eee8a4320baebe0fe40364f6580e",
"text": "The dup system related subjects others recvfrom and user access methods. The minimal facilities they make up. A product before tackling 'the design, decisions they probably should definitely. Multiplexer'' interprocess communication in earlier addison wesley has the important features a tutorial. Since some operating system unstructured devices a process init see. At berkeley software in earlier authoritative technical information on write operations. The lowest unused multiprocessor support for, use this determination. No name dot spelled with the system. Later it a file several, reasons often single user interfacesis excluded except.",
"title": ""
},
{
"docid": "ff9e0e5c2bb42955d3d29db7809414a1",
"text": "We present a novel methodology for the automated detection of breast lesions from dynamic contrast-enhanced magnetic resonance volumes (DCE-MRI). Our method, based on deep reinforcement learning, significantly reduces the inference time for lesion detection compared to an exhaustive search, while retaining state-of-art accuracy. This speed-up is achieved via an attention mechanism that progressively focuses the search for a lesion (or lesions) on the appropriate region(s) of the input volume. The attention mechanism is implemented by training an artificial agent to learn a search policy, which is then exploited during inference. Specifically, we extend the deep Q-network approach, previously demonstrated on simpler problems such as anatomical landmark detection, in order to detect lesions that have a significant variation in shape, appearance, location and size. We demonstrate our results on a dataset containing 117 DCE-MRI volumes, validating run-time and accuracy of lesion detection.",
"title": ""
},
{
"docid": "60fbaecc398f04bdb428ccec061a15a5",
"text": "A decade earlier, work on modeling and analyzing social network, was primarily focused on manually collected datasets where the friendship links were sparse but relatively noise free (i.e. all links represented strong physical relation). With the popularity of online social networks, the notion of “friendship” changed dramatically. The data collection, now although automated, contains dense friendship links but the links contain noisier information (i.e. some weaker relationships). The aim of this study is to identify these weaker links and suggest how these links (identification) play a vital role in improving social media design elements such as privacy control, detection of auto-bots, friend introductions, information prioritization and so on. The binary metric used so far for modeling links in social network (i.e. friends or not) is of little importance as it groups all our relatives, close friends and acquaintances in the same category. Therefore a popular notion of tie-strength has been incorporated for modeling links. In this paper, a predictive model is presented that helps evaluate tie-strength for each link in network based on transactional features (e.g. communication, file transfer, photos). The model predicts tie strength with 76.4% efficiency. This work also suggests that important link properties manifest similarly across different social media sites.",
"title": ""
},
{
"docid": "cc4548925973baa6220ad81082a93c86",
"text": "Usually benefits for transportation investments are analysed within a framework of cost-benefit analysis or its related techniques such as financial analysis, cost-effectiveness analysis, life-cycle costing, economic impact analysis, and others. While these tools are valid techniques in general, their application to intermodal transportation would underestimate the overall economic impact by missing important aspects of productivity enhancement. Intermodal transportation is an example of the so-called general purpose technologies (GPTs) that are characterized by statistically significant spillover effects. Diffusion, secondary innovations, and increased demand for specific human capital are basic features of GPTs. Eventually these features affect major macroeconomic variables, especially productivity. Recent economic literature claims that in order to study GPTs, micro and macro evidence should be combined to establish a better understanding of the connecting mechanisms from the micro level to the overall performance of an economy or the macro level. This study analyses these issues with respect to intermodal transportation. The goal is to understand the basic micro and macro mechanisms behind intermodal transportation in order to further develop a rigorous framework for evaluation of benefits from intermodal transportation. In doing so, lessons from computer simulation of the basic features of intermodal transportation are discussed and conclusions are made regarding an agenda for work in the field. 1 Dr. Yuri V. Yevdokimov, Assistant Professor of Economics and Civil Engineering, University of New Brunswick, Canada, Tel. (506) 447-3221, Fax (506) 453-4514, E-mail: [email protected] Introduction Intermodal transportation can be thought of as a process for transporting freight and passengers by means of a system of interconnected networks, involving various combinations of modes of transportation, in which all of the components are seamlessly linked and efficiently combined. Intermodal transportation is rapidly gaining acceptance as an integral component of the systems approach of conducting business in an increasingly competitive and interdependent global economy. For example, the United States Code with respect to transportation states: AIt is the policy of the United States Government to develop a National Intermodal Transportation System that is economically efficient and environmentally sound, provides the foundation for the United States to compete in the global economy and will move individuals and property in an energy efficient way. The National Intermodal Transportation System shall consist of all forms of transportation in a unified, interconnected manner, including the transportation systems of the future, to reduce energy consumption and air pollution while promoting economic development and supporting the United States= pre-eminent position in international commerce.@ (49 USC, Ch. 55, Sec. 5501, 1998) David Collenette (1997), the Transport Minister of Canada, noted: AWith population growth came development, and the relative advantages and disadvantages of the different modes changed as the transportation system became more advanced.... 
Intermodalism today is about safe, efficient transportation by the most appropriate combination of modes.” (The Summit on North American Intermodal Transportation, 1997) These statements define intermodal transportation as a macroeconomic concept, because an effective transportation system is a vital factor in assuring the efficiency of an economic system as a whole. Moreover, intermodal transportation is an important socio-economic phenomenon which implies that the benefits of intermodal transportation have to be evaluated at the macroeconomic level, or at least at the regional level, involving all elements of the economic system that gain from having a more efficient transportation network in place. Defining Economic Benefits of Intermodal Transportation Traditionally, the benefits of a transportation investment have been primarily evaluated through reduced travel time and reduced vehicle maintenance and operation costs. However, according to Weisbrod and Treyz (1998), such methods underestimate the total benefits of transportation investment by “missing other important aspects of productivity enhancement.” It is so because transportation does not have an intrinsic purpose in itself and is rather intended to enable other economic activities such as production, consumption, leisure, and dissemination of knowledge to take place. Hence, in order to measure total economic benefits of investing in intermodal transportation, it is necessary to understand their basic relationships with different economic activities. Eventually, improvements in transportation reduce transportation costs. The immediate benefit of the reduction is the fall in total cost of production in an economic system under study which results in growth of the system's output. This conclusion has been known in economic development literature since Tinbergen's paper in 1957 (Tinbergen, 1957). However, the literature does not explicitly identify why transportation costs will fall. This issue is addressed in this discussion with respect to intermodal transportation. Transportation is a multiple service to multiple users. It is produced in transportation networks that provide infrastructure for economic activities. It appears that transportation networks have economies of scale. As discussed below, intermodal transportation magnifies these scale effects resulting in increasing returns to scale (IRS) of a specific nature. It implies that there are positive externalities that arise because of the scale effects, externalities that can initiate cumulative economic growth at the regional level as well as at the national level (see, for example, Brathen and Hervick, 1997, and Hussain and Westin, 1997). The phenomenon is known as a spill-over effect. Previously the effect has been evaluated through the contribution of transportation infrastructure investment to economic growth. Since Auschauer's (1989) paper many economists have found evidence of such a contribution (see, for example, Bonaglia and Ferrara, 2000 and Khanam, 1996). Intermodal transportation as it was defined at the very beginning is more than mere improvements in transportation infrastructure. From a theoretical standpoint, it possesses some characteristics of the general-purpose technologies (GPT), and it seems appropriate to regard it as an example of the GPT, which is discussed below. It appears reasonable to study intermodal transportation as a two-way improvement of an economic system's productivity. 
On the one hand, it improves current operational functions of the system. On the other hand, it expands those functions. Both improvements are achieved by consolidating different transportation systems into a seamless transportation network that utilizes the comparative advantages of different transportation modes. Improvements due to intermodal transportation are associated with the increased productivity of transportation services and a reduction in logistic costs. The former results in an increased volume of transportation per unit cost, while the latter directly reduces costs of commodity production. Expansion of the intermodal transportation network is associated with economies of scale and better accessibility to input and output markets. The overall impact of intermodal transportation can be divided into four elements: (i) an increase in the volume of transportation in an existing transportation network; (ii) a reduction in logistic costs of current operations; (iii) the economies of scale associated with transportation network expansion; (iv) better accessibility to input and output markets. These four elements are discussed below in a sequence. Increase in volume of transportation in the existing network An increase in volume of transportation can lead to economies of density a specific scale effect. The economies of density exist if an increase in the volume of transportation in the network does not require a proportional increase in all inputs of the network. Usually the phenomenon is associated with an increase in the frequency of transportation (traffic) within the existing network (see Boyer, 1998 for a formal definition, Ciccone and Hall, 1996 for general discussion of economies of density, and Fujii, Im and Mak, 1992 for examples of economies of density in transportation). In the case of intermodal transportation, economies of density are achieved through cargo containerization, cargo consolidation and computer-guiding systems at intermodal facilities. Cargo containerization and consolidation result in an increased load factor of transportation vehicles and higher capacity utilization of the transportation fixed facilities, while utilization of computer-guiding systems results in higher labour productivity. For instance, in 1994 Burlington Northern Santa Fe Railway (BNSF) introduced the Alliance Intermodal Facility at Fort Worth, Texas, into its operations between Chicago and Los Angeles. According to OmniTRAX specialists, who operates the facility, BNSF has nearly doubled its volume of throughput at the intermodal facility since 1994. First, containerization of commodities being transported plus hubbing or cargo consolidation at the intermodal facility resulted in longer trains with higher frequency. Second, all day-to-day operations at the intermodal facility are governed by the Optimization Alternatives Strategic Intermodal Scheduler (OASIS) computer system, which allowed BNSF to handle more operations with less labour. Reduction in Logistic Costs Intermodal transportation is characterized by optimal frequency of service and modal choice and increased reliability. Combined, these two features define the just-in-time delivery -a major service produced by intermodal transportation. Furthermore, Blackburn (1991) argues that just-in-time d",
"title": ""
},
{
"docid": "7b717d6c4506befee2a374333055e2d1",
"text": "This is the pre-acceptance version, to read the final version please go to IEEE Geoscience and Remote Sensing Magazine on IEEE XPlore. Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven as an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or, should we resist a “black-box” solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate remote sensing scientists to bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented large-scale influential challenges, such as climate change and urbanization. X. Zhu and L. Mou are with the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Germany and with Signal Processing in Earth Observation (SiPEO), Technical University of Munich (TUM), Germany, E-mails: [email protected]; [email protected]. D. Tuia was with the Department of Geography, University of Zurich, Switzerland. He is now with the Laboratory of GeoInformation Science and Remote Sensing, Wageningen University of Research, the Netherlands. E-mail: [email protected]. G.-S Xia and L. Zhang are with the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University. E-mail:[email protected]; [email protected]. F. Xu is with the Key Laboratory for Information Science of Electromagnetic Waves (MoE), Fudan Univeristy. E-mail: [email protected]. F. Fraundorfer is with the Institute of Computer Graphics and Vision, TU Graz, Austria and with the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Germany. E-mail: [email protected]. The work of X. Zhu and L. Mou are supported by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation programme (grant agreement No [ERC-2016-StG-714087], Acronym: So2Sat), Helmholtz Association under the framework of the Young Investigators Group “SiPEO” (VH-NG-1018, www.sipeo.bgu.tum.de) and China Scholarship Council. The work of D. Tuia is supported by the Swiss National Science Foundation (SNSF) under the project NO. PP0P2 150593. The work of G.-S. Xia and L. Zhang are supported by the National Natural Science Foundation of China (NSFC) projects with grant No. 41501462 and No. 41431175. The work of F. Xu are supported by the National Natural Science Foundation of China (NSFC) projects with grant No. 61571134. October 12, 2017 DRAFT ar X iv :1 71 0. 03 95 9v 1 [ cs .C V ] 1 1 O ct 2 01 7 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE, IN PRESS. 2",
"title": ""
},
{
"docid": "d57034a3fe20c3f66b1426bdf6cfb80c",
"text": "This paper presents a side-channel analysis of the bitstream encryption mechanism provided by Xilinx Virtex FPGAs. This work covers our results analyzing the Virtex-4 and Virtex-5 family showing that the encryption mechanism can be completely broken with moderate effort. The presented results provide an overview of a practical real-world analysis and should help practitioners to judge the necessity to implement side-channel countermeasures. We demonstrate sophisticated attacks on off-the-shelf FPGAs that go far beyond schoolbook attacks on 8-bit AES S-boxes. We were able to perform the key extraction by using only the measurements of a single power-up. Access to the key enables cloning and manipulating a design, which has been encrypted to protect the intellectual property and to prevent fraud. As a consequence, the target product faces serious threats like IP theft and more advanced attacks such as reverse engineering or the introduction of hardware Trojans. To the best of our knowledge, this is the first successful attack against the bitstream encryption of Xilinx Virtex-4 and Virtex-5 reported in open literature.",
"title": ""
},
{
"docid": "a3be253034ffcf61a25ad265fda1d4ff",
"text": "With the development of automated logistics systems, flexible manufacture systems (FMS) and unmanned automated factories, the application of automated guided vehicle (AGV) gradually become more important to improve production efficiency and logistics automatism for enterprises. The development of the AGV systems play an important role in reducing labor cost, improving working conditions, unifying information flow and logistics. Path planning has been a key issue in AGV control system. In this paper, two key problems, shortest time path planning and collision in multi AGV have been solved. An improved A-Star (A*) algorithm is proposed, which introduces factors of turning, and edge removal based on the improved A* algorithm is adopted to solve k shortest path problem. Meanwhile, a dynamic path planning method based on A* algorithm which searches effectively the shortest-time path and avoids collision has been presented. Finally, simulation and experiment have been conducted to prove the feasibility of the algorithm.",
"title": ""
},
{
"docid": "0ec17619360b449543017274c9640aff",
"text": "Conventional horizontal evolutionary prototyping for small-data system development is inadequate and too expensive for identifying, analyzing, and mitigating risks in big data system development. RASP (Risk-Based, Architecture-Centric Strategic Prototyping) is a model for cost-effective, systematic risk management in agile big data system development. It uses prototyping strategically and only in areas that architecture analysis can't sufficiently address. Developers use less costly vertical evolutionary prototypes instead of blindly building full-scale prototypes. An embedded multiple-case study of nine big data projects at a global outsourcing firm validated RASP. A decision flowchart and guidelines distilled from lessons learned can help architects decide whether, when, and how to do strategic prototyping. This article is part of a special issue on Software Engineering for Big Data Systems.",
"title": ""
},
{
"docid": "eef07c1edf8ea51fcd66327aa8edb45e",
"text": "Human lip-reading is a challenging task. It requires not only knowledge of underlying language but also visual clues to predict spoken words. Experts need certain level of experience and understanding of visual expressions learning to decode spoken words. Now-a-days, with the help of deep learning it is possible to translate lip sequences into meaningful words. The speech recognition in the noisy environments can be increased with the visual information [1]. To demonstrate this, in this project, we have tried to train two different deep-learning models for lip-reading: first one for video sequences using spatiotemporal convolution neural network, Bi-gated recurrent neural network and Connectionist Temporal Classification Loss, and second for audio that inputs the MFCC features to a layer of LSTM cells and output the sequence. We have also collected a small audio-visual dataset to train and test our model. Our target is to integrate our both models to improve the speech recognition in the noisy environment.",
"title": ""
},
{
"docid": "3533e733f0d418a0be1ec4af7e7740aa",
"text": "Visual depiction of the structure and evolution of science has been proposed as a key strategy for dealing with the large, complex, and increasingly interdisciplinary records of scientific communication. While every such visualization assumes the existence of spatial structures within the system of science, new methods and tools are rarely linked to thorough reflection on the underlying spatial concepts. Meanwhile, geographic information science has adopted a view of geographic space as conceptualized through the duality of discrete objects and continuous fields. This paper argues that conceptualization of science has been dominated by a view of its constituent elements (e.g., authors, articles, journals, disciplines) as discrete objects. It is proposed that, like in geographic information science, alternative concepts could be used for the same phenomenon. For example, one could view an author as either a discrete object at a specific location or as a continuous field occupying all of a discipline. It is further proposed that this duality of spatial concepts can extend to the methods by which low-dimensional geometric models of high-dimensional scientific spaces are created and used. This can result in new methods revealing different kinds of insights. This is demonstrated by a juxtaposition of two visualizations of an author’s intellectual evolution on the basis of either a discrete or continuous conceptualization. © 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4542d7d6f8109dcc9ade9e8fc44918bb",
"text": "This paper proposes a subject transfer framework for EEG classification. It aims to improve the classification performance when the training set of the target subject (namely user) is small owing to the need to reduce the calibration session. Our framework pursues improvement not only at the feature extraction stage, but also at the classification stage. At the feature extraction stage, we first obtain a candidate filter set for each subject through a previously proposed feature extraction method. Then, we design different criterions to learn two sparse subsets of the candidate filter set, which are called the robust filter bank and adaptive filter bank, respectively. Given robust and adaptive filter banks, at the classification step, we learn classifiers corresponding to these filter banks and employ a two-level ensemble strategy to dynamically and locally combine their outcomes to reach a single decision output. The proposed framework, as validated by experimental results, can achieve positive knowledge transfer for improving the performance of EEG classification.",
"title": ""
},
{
"docid": "22cb22b6a3f46b4ca3325be08ad9f077",
"text": "The purpose of this study was to evaluate setup accuracy and quantify random and systematic errors of the BrainLAB stereotactic immobilization mask and localization system using kV on-board imaging. Nine patients were simulated and set up with the BrainLAB stereotactic head immobilization mask and localizer to be treated for brain lesions using single and hypofractions. Orthogonal pairs of projections were acquired using a kV on-board imager mounted on a Varian Trilogy machine. The kV projections were then registered with digitally-reconstructed radiographs (DRR) obtained from treatment planning. Shifts between the kV images and reference DRRs were calculated in the different directions: anterior-posterior (A-P), medial-lateral (R-L) and superior-inferior (S-I). If the shifts were larger than 2mm in any direction, the patient was reset within the immobilization mask until satisfying setup accuracy based on image guidance has been achieved. Shifts as large as 4.5 mm, 5.0 mm, 8.0 mm in the A-P, R-L and S-I directions, respectively, were measured from image registration of kV projections and DRRs. These shifts represent offsets between the treatment and simulation setup using immobilization mask. The mean offsets of 0.1 mm, 0.7 mm, and -1.6 mm represent systematic errors of the BrainLAB localizer in the A-P, R-L and S-I directions, respectively. The mean of the radial shifts is about 1.7 mm. The standard deviations of the shifts were 2.2 mm, 2.0 mm, and 2.6 mm in A-P, R-L and S-I directions, respectively, which represent random patient setup errors with the BrainLAB mask. The Brain-LAB mask provides a noninvasive, practical and flexible immobilization system that keeps the patients in place during treatment. Relying on this system for patient setup might be associated with significant setup errors. Image guidance with the kV on-board imager provides an independent verification technique to ensure accuracy of patient setup. Since the patient may relax or move during treatment, uncontrolled and undetected setup errors may be produced with patients that are not well-immobilized. Therefore, the combination of stereotactic immobilization and image guidance achieves more controlled and accurate patient setup within 2mm in A-P, R-L and S-I directions.",
"title": ""
},
{
"docid": "ff59d6370b52f6e17d70669f20a03415",
"text": "Allport (1954) recognized that attachment to one’s ingroups does not necessarily require hostility toward outgroups. Yet the prevailing approach to the study of ethnocentrism, ingroup bias, and prejudice presumes that ingroup love and outgroup hate are reciprocally related. Findings from both cross-cultural research and laboratory experiments support the alternative view that ingroup identification is independent of negative attitudes toward outgoups and that much ingroup bias and intergroup discrimination is motivated by preferential treatment of ingroup members rather than direct hostility toward outgroup members. Thus to understand the roots of prejudice and discrimination requires first of all a better understanding of the functions that ingroup formation and identification serve for human beings. This article reviews research and theory on the motivations for maintenance of ingroup boundaries and the implications of ingroup boundary protection for intergroup relations, conflict, and conflict prevention.",
"title": ""
},
{
"docid": "3916e752fffbd121f5224a49883729d9",
"text": "Photovoltaic power plants (PVPPs) typically operate by tracking the maximum power point (MPP) in order to maximize the conversion efficiency. However, with the continuous increase of installed grid-connected PVPPs, power system operators have been experiencing new challenges, such as overloading, overvoltages, and operation during grid-voltage disturbances. Consequently, constant power generation (CPG) is imposed by grid codes. An algorithm for the calculation of the photovoltaic panel voltage reference, which generates a constant power from the PVPP, is introduced in this paper. The key novelty of the proposed algorithm is its applicability for both single- and two-stage PVPPs and flexibility to move the operation point to the right or left side of the MPP. Furthermore, the execution frequency of the algorithm and voltage increments between consecutive operating points are modified based on a hysteresis band controller in order to obtain fast dynamic response under transients and low-power oscillation during steady-state operation. The performance of the proposed algorithm for both single- and two-stage PVPPs is examined on a 50-kVA simulation setup of these topologies. Moreover, experimental results on a 1-kVA PV system validate the effectiveness of the proposed algorithm under various operating conditions, demonstrating functionalities of the proposed CPG algorithm.",
"title": ""
},
{
"docid": "95514c6f357115ef181b652eedd780fd",
"text": "Application Programming Interfaces (APIs) are a tremendous resource—that is, when they are stable. Several studies have shown that this is unfortunately not the case. Of those, a large-scale study of API changes in the Pharo Smalltalk ecosystem documented several findings about API deprecations and their impact on API clients. We conduct a partial replication of this study, considering more than 25,000 clients of five popular Java APIs on GitHub. This work addresses several shortcomings of the previous study, namely: a study of several distinct API clients in a popular, statically-typed language, with more accurate version information. We compare and contrast our findings with the previous study and highlight new ones, particularly on the API client update practices and the startling similarities between reaction behavior in Smalltalk and Java.",
"title": ""
},
{
"docid": "4f2a8e505a70c4204a2f36c4d8989713",
"text": "In our previous research, we examined whether minimally trained crowd workers could find, categorize, and assess sidewalk accessibility problems using Google Street View (GSV) images. This poster paper presents a first step towards combining automated methods (e.g., machine visionbased curb ramp detectors) in concert with human computation to improve the overall scalability of our approach.",
"title": ""
}
] |
scidocsrr
|
fa53a4ff95d811a1f39fdd8a7bec2ce5
|
No compromises: distributed transactions with consistency, availability, and performance
|
[
{
"docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0",
"text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.",
"title": ""
}
] |
[
{
"docid": "625c5c89b9f0001a3eed1ec6fb498c23",
"text": "About a 100 years ago, the Drosophila white mutant marked the birth of Drosophila genetics. The white gene turned out to encode the first well studied ABC transporter in arthropods. The ABC gene family is now recognized as one of the largest transporter families in all kingdoms of life. The majority of ABC proteins function as primary-active transporters that bind and hydrolyze ATP while transporting a large diversity of substrates across lipid membranes. Although extremely well studied in vertebrates for their role in drug resistance, less is known about the role of this family in the transport of endogenous and exogenous substances in arthropods. The ABC families of five insect species, a crustacean and a chelicerate have been annotated in some detail. We conducted a thorough phylogenetic analysis of the seven arthropod and human ABC protein subfamilies, to infer orthologous relationships that might suggest conserved function. Most orthologous relationships were found in the ABCB half transporter, ABCD, ABCE and ABCF subfamilies, but specific expansions within species and lineages are frequently observed and discussed. We next surveyed the role of ABC transporters in the transport of xenobiotics/plant allelochemicals and their involvement in insecticide resistance. The involvement of ABC transporters in xenobiotic resistance in arthropods is historically not well documented, but an increasing number of studies using unbiased differential gene expression analysis now points to their importance. We give an overview of methods that can be used to link ABC transporters to resistance. ABC proteins have also recently been implicated in the mode of action and resistance to Bt toxins in Lepidoptera. Given the enormous interest in Bt toxicology in transgenic crops, such findings will provide an impetus to further reveal the role of ABC transporters in arthropods. 2014 The Authors. Published by Elsevier Ltd. Open access under CC BY-NC-ND license.",
"title": ""
},
{
"docid": "4182770927ae68e5047906df446bafe9",
"text": "In this study, a square-shaped slot antenna is designed for the future fifth generation (5G) wireless applications. The antenna has a compact size of 0.64λg × 0.64λg at 38 GHz, which consists of ellipse shaped radiating patch fed by a 50 Q micro-strip line on the Rogers RT5880 substrates. A rectangle shaped slot is etched in the ground plane to enhance the antenna bandwidth. In order to obtain better impedance matching bandwidth of the antennas, some small circular radiating patches are added to the square-shaped slot. Simulations show that the measured impedance bandwidth of the proposed antenna ranges from 20 to 42 GHz for a reflection coefficient of Su less than −10dB which is cover 5G bands (28/38GHz). The proposed antenna provides almost omni-directional patterns, relatively flat gain, and high radiation efficiency through the frequency band.",
"title": ""
},
{
"docid": "d09d9d9f74079981f8f09e829e2af255",
"text": "Determination of sensitive and specific markers of very early AD progression is intended to aid researchers and clinicians to develop new treatments and monitor their effectiveness, as well as to lessen the time and cost of clinical trials. Magnetic Resonance (MR)-related biomarkers have been recently identified by the use of machine learning methods for the in vivo differential diagnosis of AD. However, the vast majority of neuroimaging papers investigating this topic are focused on the difference between AD and patients with mild cognitive impairment (MCI), not considering the impact of MCI patients who will (MCIc) or not convert (MCInc) to AD. Morphological T1-weighted MRIs of 137 AD, 76 MCIc, 134 MCInc, and 162 healthy controls (CN) selected from the Alzheimer's disease neuroimaging initiative (ADNI) cohort, were used by an optimized machine learning algorithm. Voxels influencing the classification between these AD-related pre-clinical phases involved hippocampus, entorhinal cortex, basal ganglia, gyrus rectus, precuneus, and cerebellum, all critical regions known to be strongly involved in the pathophysiological mechanisms of AD. Classification accuracy was 76% AD vs. CN, 72% MCIc vs. CN, 66% MCIc vs. MCInc (nested 20-fold cross validation). Our data encourage the application of computer-based diagnosis in clinical practice of AD opening new prospective in the early management of AD patients.",
"title": ""
},
{
"docid": "b505c23c5b3c924242ca6cf65fd4efc7",
"text": "Adolescent idiopathic scoliosis is a common disease with an overall prevalence of 0.47-5.2 % in the current literature. The female to male ratio ranges from 1.5:1 to 3:1 and increases substantially with increasing age. In particular, the prevalence of curves with higher Cobb angles is substantially higher in girls than in boys: The female to male ratio rises from 1.4:1 in curves from 10° to 20° up to 7.2:1 in curves >40°. Curve pattern and prevalence of scoliosis is not only influenced by gender, but also by genetic factors and age of onset. These data obtained from school screening programs have to be interpreted with caution, since methods and cohorts of the different studies are not comparable as age groups of the cohorts and diagnostic criteria differ substantially. We do need data from studies with clear standards of diagnostic criteria and study protocols that are comparable to each other.",
"title": ""
},
{
"docid": "22572394c6f522b70e1f14b8156a5601",
"text": "A new substrate integrated horn antenna with hard side walls combined with a couple of soft surfaces is introduced. The horn takes advantage of the air medium for propagation inside, while having a thickness of dielectric on the walls to realize hard conditions. The covering layers of the air-filled horn are equipped with strip-via arrays, which act as soft surfaces around the horn aperture to reduce the back radiations. The uniform amplitude distribution of the aperture resulting from the hard conditions and the phase correction combined with the profiled horn walls provided a narrow beamwidth and −13 dB sidelobe levels in the frequency of the hard condition, which is validated by the simulated and measured results.",
"title": ""
},
{
"docid": "af08bf07cc59217f0763275e04b3d62b",
"text": "Modern machine learning algorithms are increasingly being used in neuroimaging studies, such as the prediction of Alzheimer's disease (AD) from structural MRI. However, finding a good representation for multivariate brain MRI features in which their essential structure is revealed and easily extractable has been difficult. We report a successful application of a machine learning framework that significantly improved the use of brain MRI for predictions. Specifically, we used the unsupervised learning algorithm of local linear embedding (LLE) to transform multivariate MRI data of regional brain volume and cortical thickness to a locally linear space with fewer dimensions, while also utilizing the global nonlinear data structure. The embedded brain features were then used to train a classifier for predicting future conversion to AD based on a baseline MRI. We tested the approach on 413 individuals from the Alzheimer's Disease Neuroimaging Initiative (ADNI) who had baseline MRI scans and complete clinical follow-ups over 3 years with the following diagnoses: cognitive normal (CN; n=137), stable mild cognitive impairment (s-MCI; n=93), MCI converters to AD (c-MCI, n=97), and AD (n=86). We found that classifications using embedded MRI features generally outperformed (p<0.05) classifications using the original features directly. Moreover, the improvement from LLE was not limited to a particular classifier but worked equally well for regularized logistic regressions, support vector machines, and linear discriminant analysis. Most strikingly, using LLE significantly improved (p=0.007) predictions of MCI subjects who converted to AD and those who remained stable (accuracy/sensitivity/specificity: =0.68/0.80/0.56). In contrast, predictions using the original features performed not better than by chance (accuracy/sensitivity/specificity: =0.56/0.65/0.46). In conclusion, LLE is a very effective tool for classification studies of AD using multivariate MRI data. The improvement in predicting conversion to AD in MCI could have important implications for health management and for powering therapeutic trials by targeting non-demented subjects who later convert to AD.",
"title": ""
},
{
"docid": "7d86abdf71d6c9dd05fc41e63952d7bf",
"text": "Crowdsourced 3D CAD models are easily accessible online, and can potentially generate an infinite number of training images for almost any object category. We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the benchmark PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark.",
"title": ""
},
{
"docid": "b9838e512912f4bcaf3c224df3548d95",
"text": "In this paper, we develop a system for training human calligraphy skills. For such a development, the so-called dynamic font and augmented reality (AR) are employed. The dynamic font is used to generate a model character, in which the character are formed as the result of 3-dimensional motion of a virtual writing device on a virtual writing plane. Using the AR technology, we then produce a visual information consisting of not only static writing path but also dynamic writing process of model character. Such a visual information of model character is given some trainee through a head mounted display. The performance is demonstrated by some experimental studies.",
"title": ""
},
{
"docid": "92377bb2bc4e2daee041c5b78a5fcaf9",
"text": "Online discussions forums, known as forums for short, are conversational social cyberspaces constituting rich repositories of content and an important source of collaborative knowledge. However, most of this knowledge is buried inside the forum infrastructure and its extraction is both complex and difficult. The ability to automatically rate postings in online discussion forums, based on the value of their contribution, enhances the ability of users to find knowledge within this content. Several key online discussion forums have utilized collaborative intelligence to rate the value of postings made by users. However, a large percentage of posts go unattended and hence lack appropriate rating.\n In this paper, we focus on automatic rating of postings in online discussion forums. A set of features derived from the posting content and the threaded discussion structure are generated for each posting. These features are grouped into five categories, namely (i) relevance, (ii) originality, (iii) forum-specific features, (iv) surface features, and (v) posting-component features. Using a non-linear SVM classifier, the value of each posting is categorized into one of three levels High, Medium, or Low. This rating represents a seed value for each posting that is leveraged in filtering forum content. Experimental results have shown promising performance on forum data.",
"title": ""
},
{
"docid": "1153287a3a5cde9f6bbacb83dffecdf3",
"text": "This communication deals with the design of a <inline-formula> <tex-math notation=\"LaTeX\">$16\\times 16$ </tex-math></inline-formula> slot array antenna fed by inverted microstrip gap waveguide (IMGW). The whole structure designed in this communication consists of radiating slots, a groove gap cavity layer, a distribution feeding network, and a transition from standard WR-15 waveguide to the IMGW. First, a <inline-formula> <tex-math notation=\"LaTeX\">$2\\times 2$ </tex-math></inline-formula> cavity-backed slot subarray is designed with periodic boundary condition to achieve good performances of radiation pattern and directivity. Then, a complete IMGW feeding network with a transition from WR-15 rectangular waveguide to the IMGW has been realized to excite the radiating slots. The complete antenna array is designed at 60-GHz frequency band and fabricated using Electrical Discharging Machining Technology. The measurements show that the antenna has a 16.95% bandwidth covering 54–64-GHz frequency range. The measured gain of the antenna is more than 28 dBi with the efficiency higher than 40% covering 54–64-GHz frequency range.",
"title": ""
},
{
"docid": "d2541bdc0eb9bf65fdeb1e50358c62eb",
"text": "Data management is a crucial aspect in the Internet of Things (IoT) on Cloud. Big data is about the processing and analysis of large data repositories on Cloud computing. Big document summarization method is an important technique for data management of IoT. Traditional document summarization methods are restricted to summarize suitable information from the exploding IoT big data on Cloud. This paper proposes a big data (i.e., documents, texts) summarization method using the extracted semantic feature which it is extracted by distributed parallel processing of NMF based cloud technique of Hadoop. The proposed method can well represent the inherent structure of big documents set using the semantic feature by the non-negative matrix factorization (NMF). In addition, it can summarize the big data size of document for IoT using the distributed parallel processing based on Hadoop. The experimental results demonstrate that the proposed method can summarize the big data document comparing with the single node of summarization methods. 1096 Yoo-Kang Ji et al.",
"title": ""
},
{
"docid": "8439dbba880179895ab98a521b4c254f",
"text": "Given the increase in demand for sustainable livelihoods for coastal villagers in developing countries and for the commercial eucheumoid Kappaphycus alvarezii (Doty) Doty, for the carrageenan industry, there is a trend towards introducing K. alvarezii to more countries in the tropical world for the purpose of cultivation. However, there is also increasing concern over the impact exotic species have on endemic ecosystems and biodiversity. Quarantine and introduction procedures were tested in northern Madagascar and are proposed for all future introductions of commercial eucheumoids (K. alvarezii, K. striatum and Eucheuma denticulatum). In addition, the impact and extent of introduction of K. alvarezii was measured on an isolated lagoon in the southern Lau group of Fiji. It is suggested that, in areas with high human population density, the overwhelming benefits to coastal ecosystems by commercial eucheumoid cultivation far outweigh potential negative impacts. However, quarantine and introduction procedures should be followed. In addition, introduction should only take place if a thorough survey has been conducted and indicates the site is appropriate. Subsequently, the project requires that a well designed and funded cultivation development programme, with a management plan and an assured market, is in place in order to make certain cultivation, and subsequently the introduced algae, will not be abandoned at a later date. KAPPAPHYCUS ALVAREZI",
"title": ""
},
{
"docid": "62b8d1ecb04506794f81a47fccb63269",
"text": "This paper addresses the mode collapse for generative adversarial networks (GANs). We view modes as a geometric structure of data distribution in a metric space. Under this geometric lens, we embed subsamples of the dataset from an arbitrary metric space into the `2 space, while preserving their pairwise distance distribution. Not only does this metric embedding determine the dimensionality of the latent space automatically, it also enables us to construct a mixture of Gaussians to draw latent space random vectors. We use the Gaussian mixture model in tandem with a simple augmentation of the objective function to train GANs. Every major step of our method is supported by theoretical analysis, and our experiments on real and synthetic data confirm that the generator is able to produce samples spreading over most of the modes while avoiding unwanted samples, outperforming several recent GAN variants on a number of metrics and offering new features.",
"title": ""
},
{
"docid": "77af48f5bb5bc77565665944b16d144e",
"text": "We examine a protocol πbeacon that outputs unpredictable and publicly verifiable randomness, meaning that the output is unknown at the time that πbeacon starts, yet everyone can verify that the output is close to uniform after πbeacon terminates. We show that πbeacon can be instantiated via Bitcoin under sensible assumptions; in particular we consider an adversary with an arbitrarily large initial budget who may not operate at a loss indefinitely. In case the adversary has an infinite budget, we provide an impossibility result that stems from the similarity between the Bitcoin model and Santha-Vazirani sources. We also give a hybrid protocol that combines trusted parties and a Bitcoin-based beacon.",
"title": ""
},
{
"docid": "e1c927d7fbe826b741433c99fff868d0",
"text": "Multiclass maps are scatterplots, multidimensional projections, or thematic geographic maps where data points have a categorical attribute in addition to two quantitative attributes. This categorical attribute is often rendered using shape or color, which does not scale when overplotting occurs. When the number of data points increases, multiclass maps must resort to data aggregation to remain readable. We present multiclass density maps: multiple 2D histograms computed for each of the category values. Multiclass density maps are meant as a building block to improve the expressiveness and scalability of multiclass map visualization. In this article, we first present a short survey of aggregated multiclass maps, mainly from cartography. We then introduce a declarative model—a simple yet expressive JSON grammar associated with visual semantics—that specifies a wide design space of visualizations for multiclass density maps. Our declarative model is expressive and can be efficiently implemented in visualization front-ends such as modern web browsers. Furthermore, it can be reconfigured dynamically to support data exploration tasks without recomputing the raw data. Finally, we demonstrate how our model can be used to reproduce examples from the past and support exploring data at scale.",
"title": ""
},
{
"docid": "448d4704991a2bdc086df8f0d7920ec5",
"text": "Global progress in the industrial field, which has led to the definition of the Industry 4.0 concept, also affects other spheres of life. One of them is the education. The subject of the article is to summarize the emerging trends in education in relation to the requirements of Industry 4.0 and present possibilities of their use. One option is using augmented reality as part of a modular learning system. The main idea is to combine the elements of the CPS technology concept with modern IT features, with emphasis on simplicity of solution and hardware ease. The synthesis of these principles can combine in a single image on a conventional device a realistic view at the technological equipment, complemented with interactive virtual model of the equipment, the technical data and real-time process information.",
"title": ""
},
{
"docid": "0cb6bbe889acb5b54043ba9cedbb4496",
"text": "This paper presents a fusion design approach of high-performance filtering balun based on the ringshaped dielectric resonator (DR) for the first time. According to the electromagnetic (EM) field properties of the TE01δ mode of the DR cavity, it can be differentially driven or extracted by reasonably placing the orientations of the feeding probes, which answers for the realization of unbalanced-to-balanced conversion. As a result, the coupling between the resonators can refer to the traditional single-ended design, regardless of the feeding scheme. Based on this, a second-order DR filtering balun is designed by converting a four-port balanced filter to a three-port device. Within the passband, the excellent performance of amplitude balance and 180° phase difference at the balun outputs can be achieved. To improve the stopband rejection by suppressing the spurious responses of the DR cavity, a third-order filtering balun using the hybrid DR and coaxial resonator is designed. It is not rigorously symmetrical, which is different from the traditional designs. The simulated and measured results with good accordance showcase good filter and balun functions at the same time.",
"title": ""
},
{
"docid": "66d5e414e54c657c026fe0e7537c94ee",
"text": "A mode-reconfigurable Butterworth bandpass filter, which can be switched between operating as a single-mode-dual-band (SMDB) and a dual-mode-single-band (DMSB) filter is presented. The filter is realized using a substrate integrated waveguide in a square cuboid geometry. Switching is enabled by using empty vias for the SMDB and liquid metal filled vias for the DMSB. The first two modes of the SMDB resonate 3 GHz apart, whereas the first two modes of the DMSB are degenerate and resonate only at the higher frequency. This is due to mode shifting of the first frequency band to the second frequency band. Measurements confirm the liquid-metal reconfiguration between the two operating modes.",
"title": ""
},
{
"docid": "64d3ecaa2f9e850cb26aac0265260aff",
"text": "The case of the Frankfurt Airport attack in 2011 in which a 21-year-old man shot several U.S. soldiers, murdering 2 U.S. airmen and severely wounding 2 others, is assessed with the Terrorist Radicalization Assessment Protocol (TRAP-18). The study is based on an extensive qualitative analysis of investigation and court files focusing on the complex interconnection among offender personality, specific opportunity structures, and social contexts. The role of distal psychological factors and proximal warning behaviors in the run up to the deed are discussed. Although in this case the proximal behaviors of fixation on a cause and identification as a “soldier” for the cause developed over years, we observed only a very brief and accelerated pathway toward the violent act. This represents an important change in the demands placed upon threat assessors.",
"title": ""
},
{
"docid": "2f9f21740603b7a84abd57d7c7c02c11",
"text": "Emerging Non-Volatile Memory (NVM) technologies are explored as potential alternatives to traditional SRAM/DRAM-based memory architecture in future microprocessor design. One of the major disadvantages for NVM is the latency and energy overhead associated with write operations. Mitigation techniques to minimize the write overhead for NVM-based main memory architecture have been studied extensively. However, most prior work focuses on optimization techniques for NVM-based main memory itself, with little attention paid to cache management policies for the Last-Level Cache (LLC).\n In this article, we propose a Writeback-Aware Dynamic CachE (WADE) management technique to help mitigate the write overhead in NVM-based memory.<sup;>1</sup;> The proposal is based on the observation that, when dirty cache blocks are evicted from the LLC and written into NVM-based memory (with PCM as an example), the long latency and high energy associated with write operations to NVM-based memory can cause system performance/power degradation. Thus, reducing the number of writeback requests from the LLC is critical.\n The proposed WADE cache management technique tries to keep highly reused dirty cache blocks in the LLC. The technique predicts blocks that are frequently written back in the LLC. The LLC sets are dynamically partitioned into a frequent writeback list and a nonfrequent writeback list. It keeps a best size of each list in the LLC. Our evaluation shows that the technique can reduce the number of writeback requests by 16.5% for memory-intensive single-threaded benchmarks and 10.8% for multicore workloads. It yields a geometric mean speedup of 5.1% for single-thread applications and 7.6% for multicore workloads. Due to the reduced number of writeback requests to main memory, the technique reduces the energy consumption by 8.1% for single-thread applications and 7.6% for multicore workloads.",
"title": ""
}
] |
scidocsrr
|
b748f0b146ddf052bd5f154905e8db12
|
Flexible Multimodal Tactile Sensing System for Object Identification
|
[
{
"docid": "f8435db6c6ea75944d1c6b521e0f3dd3",
"text": "We present the design, fabrication process, and characterization of a multimodal tactile sensor made of polymer materials and metal thin film sensors. The multimodal sensor can detect the hardness, thermal conductivity, temperature, and surface contour of a contact object for comprehensive evaluation of contact objects and events. Polymer materials reduce the cost and the fabrication complexity for the sensor skin, while increasing mechanical flexibility and robustness. Experimental tests show the skin is able to differentiate between objects using measured properties. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "90ca336fa0d6aae07914f03df9bbc2ad",
"text": "Planning-based techniques are a very powerful tool for automated story generation. However, as the number of possible actions increases, traditional planning techniques suffer from a combinatorial explosion due to large branching factors. In this work, we apply Monte Carlo Tree Search (MCTS) techniques to generate stories in domains with large numbers of possible actions (100+). Our approach employs a Bayesian story evaluation method to guide the planning towards believable stories that reach a user defined goal. We generate stories in a novel domain with different type of story goals. Our approach shows an order of magnitude improvement in performance over traditional search techniques.",
"title": ""
},
{
"docid": "260c12152d9bd38bd0fde005e0394e17",
"text": "On the initiative of the World Health Organization, two meetings on the Standardization of Reporting Results of Cancer Treatment have been held with representatives and members of several organizations. Recommendations have been developed for standardized approaches to the recording of baseline data relating to the patient, the tumor, laboratory and radiologic data, the reporting of treatment, grading of acute and subacute toxicity, reporting of response, recurrence and disease-free interval, and reporting results of therapy. These recommendations, already endorsed by a number of organizations, are proposed for international acceptance and use to make it possible for investigators to compare validly their results with those of others.",
"title": ""
},
{
"docid": "0e0c1004ad3bf29c5a855531a5185991",
"text": "At Facebook, our data systems process huge volumes of data, ranging from hundreds of terabytes in memory to hundreds of petabytes on disk. We categorize our systems as “small data” or “big data” based on the type of queries they run. Small data refers to OLTP-like queries that process and retrieve a small amount of data, for example, the 1000s of objects necessary to render Facebook's personalized News Feed for each person. These objects are requested by their ids; indexes limit the amount of data accessed during a single query, regardless of the total volume of data. Big data refers to queries that process large amounts of data, usually for analysis: trouble-shooting, identifying trends, and making decisions. Big data stores are the workhorses for data analysis at Facebook. They grow by millions of events (inserts) per second and process tens of petabytes and hundreds of thousands of queries per day. In this tutorial, we will describe our data systems and the current challenges we face. We will lead a discussion on these challenges, approaches to solve them, and potential pitfalls. We hope to stimulate interest in solving these problems in the research community.",
"title": ""
},
{
"docid": "213acf777983f4339d6ee25a4467b1be",
"text": "RoadGraph is a graph based environmental model for driver assistance systems. It integrates information from different sources like digital maps, onboard sensors and V2X communication into one single model about vehicle's environment. At the moment of information aggregation some function independent situation analysis is done. In this paper the concept of the RoadGraph is described in detail and first results are shown.",
"title": ""
},
{
"docid": "5dbc520fbac51f9cc1d13480e7bfb603",
"text": "In 1899, Nikola Tesla, who had devised a type of resonant transformer called the Tesla coil, achieved a major breakthrough in his work by transmitting 100 million volts of electric power wirelessly over a distance of 26 miles to light up a bank of 200 light bulbs and run one electric motor. Tesla claimed to have achieved 95% efficiency, but the technology had to be shelved because the effects of transmitting such high voltages in electric arcs would have been disastrous to humans and electrical equipment in the vicinity. This technology has been languishing in obscurity for a number of years, but the advent of portable devices such as mobiles, laptops, smartphones, MP3 players, etc warrants another look at the technology. We propose the use of a new technology, based on strongly coupled magnetic resonance. It consists of a transmitter, a current carrying copper coil, which acts as an electromagnetic resonator and a receiver, another copper coil of similar dimensions to which the device to be powered is attached. The transmitter emits a non-radiative magnetic field resonating at MHz frequencies, and the receiving unit resonates in that field. The resonant nature of the process ensures a strong interaction between the sending and receiving unit, while interaction with rest of the environment is weak.",
"title": ""
},
{
"docid": "ee6906550c2f9d294e411688bae5db71",
"text": "This position paper formalises an abstract model for complex negotiation dialogue. This model is to be used for the benchmark of optimisation algorithms ranging from Reinforcement Learning to Stochastic Games, through Transfer Learning, One-Shot Learning or others.",
"title": ""
},
{
"docid": "5eab47907e673449ad73ec6cef30bc07",
"text": "Three-dimensional circuits built upon multiple layers of polyimide are required for constructing Si/SiGe monolithic microwave/mm-wave integrated circuits on low resistivity Si wafers. However, the closely spaced transmission lines are susceptible to high levels of cross-coupling, which degrades the overall circuit performance. In this paper, theoretical and experimental results on coupling of Finite Ground Coplanar (FGC) waveguides embedded in polyimide layers are presented for the first time. These results show that FGC lines have approximately 8 dB lower coupling than coupled Coplanar Waveguides. Furthermore, it is shown that the forward and backward coupling characteristics for FGC lines do not resemble the coupling characteristics of other transmission lines such as microstrip.",
"title": ""
},
{
"docid": "a0aa33c4afa58bd4dff7eb209bfb7924",
"text": "OBJECTIVE\nTo assess whether frequent marijuana use is associated with residual neuropsychological effects.\n\n\nDESIGN\nSingle-blind comparison of regular users vs infrequent users of marijuana.\n\n\nPARTICIPANTS\nTwo samples of college undergraduates: 65 heavy users, who had smoked marijuana a median of 29 days in the last 30 days (range, 22 to 30 days) and who also displayed cannabinoids in their urine, and 64 light users, who had smoked a median of 1 day in the last 30 days (range, 0 to 9 days) and who displayed no urinary cannabinoids.\n\n\nINTERVENTION\nSubjects arrived at 2 PM on day 1 of their study visit, then remained at our center overnight under supervision. Neuropsychological tests were administered to all subjects starting at 9 AM on day 2. Thus, all subjects were abstinent from marijuana and other drugs for a minimum of 19 hours before testing.\n\n\nMAIN OUTCOME MEASURES\nSubjects received a battery of standard neuropsychological tests to assess general intellectual functioning, abstraction ability, sustained attention, verbal fluency, and ability to learn and recall new verbal and visuospatial information.\n\n\nRESULTS\nHeavy users displayed significantly greater impairment than light users on attention/executive functions, as evidenced particularly by greater perseverations on card sorting and reduced learning of word lists. These differences remained after controlling for potential confounding variables, such as estimated levels of premorbid cognitive functioning, and for use of alcohol and other substances in the two groups.\n\n\nCONCLUSIONS\nHeavy marijuana use is associated with residual neuropsychological effects even after a day of supervised abstinence from the drug. However, the question remains open as to whether this impairment is due to a residue of drug in the brain, a withdrawal effect from the drug, or a frank neurotoxic effect of the drug. from marijuana",
"title": ""
},
{
"docid": "c75328d500b9a399ee9f5eeb8a0f979d",
"text": "Denial of Service (DoS) attacks continue to grow in magnitude, duration, and frequency increasing the demand for techniques to protect services from disruption, especially at a low cost. We present Denial of Service Elusion (DoSE) as an inexpensive method for mitigating network layer attacks by utilizing cloud infrastructure and content delivery networks to protect services from disruption. DoSE uses these services to create a relay network between the client and the protected service that evades attack by selectively releasing IP address information. DoSE incorporates client reputation as a function of prior behavior to stop attackers along with a feedback controller to limit costs. We evaluate DoSE by modeling relays, clients, and attackers in an agent-based MATLAB simulator. The results show DoSE can mitigate a single-insider attack on 1,000 legitimate clients in 3.9 minutes while satisfying an average of 88.2% of requests during the attack.",
"title": ""
},
{
"docid": "a42b9567dfc9e9fe92bc9aeb38ef5e5a",
"text": "This paper presents a physical model for planar spiral inductors on silicon, which accounts for eddy current effect in the conductor, crossover capacitance between the spiral and center-tap, capacitance between the spiral and substrate, substrate ohmic loss, and substrate capacitance. The model has been confirmed with measured results of inductors having a wide range of layout and process parameters. This scalable inductor model enables the prediction and optimization of inductor performance.",
"title": ""
},
{
"docid": "733ddc5a642327364c2bccb6b1258fac",
"text": "Human memory is unquestionably a vital cognitive ability but one that can often be unreliable. External memory aids such as diaries, photos, alarms and calendars are often employed to assist in remembering important events in our past and future. The recent trend for lifelogging, continuously documenting ones life through wearable sensors and cameras, presents a clear opportunity to augment human memory beyond simple reminders and actually improve its capacity to remember. This article surveys work from the fields of computer science and psychology to understand the potential for such augmentation, the technologies necessary for realising this opportunity and to investigate what the possible benefits and ethical pitfalls of using such technology might be.",
"title": ""
},
{
"docid": "36f73143b6f4d80e8f1d77505fabbfcf",
"text": "Progress of IoT and ubiquitous computing technologies has strong anticipation to realize smart services in households such as efficient energy-saving appliance control and elderly monitoring. In order to put those applications into practice, high-accuracy and low-cost in-home living activity recognition is essential. Many researches have tackled living activity recognition so far, but the following problems remain: (i)privacy exposure due to utilization of cameras and microphones; (ii) high deployment and maintenance costs due to many sensors used; (iii) burden to force the user to carry the device and (iv) wire installation to supply power and communication between sensor node and server; (v) few recognizable activities; (vi) low recognition accuracy. In this paper, we propose an in-home living activity recognition method to solve all the problems. To solve the problems (i)--(iv), our method utilizes only energy harvesting PIR and door sensors with a home server for data collection and processing. The energy harvesting sensor has a solar cell to drive the sensor and wireless communication modules. To solve the problems (v) and (vi), we have tackled the following challenges: (a) determining appropriate features for training samples; and (b) determining the best machine learning algorithm to achieve high recognition accuracy; (c) complementing the dead zone of PIR sensor semipermanently. We have conducted experiments with the sensor by five subjects living in a home for 2-3 days each. As a result, the proposed method has achieved F-measure: 62.8% on average.",
"title": ""
},
{
"docid": "c78e0662b9679a70f1ec4416b3abd2b4",
"text": "This article offers possibly the first peer-reviewed study on the training routines of elite eathletes, with special focus on the subjects’ physical exercise routines. The study is based on a sample of 115 elite e-athletes. According to their responses, e-athletes train approximately 5.28 hours every day around the year on the elite level. Approximately 1.08 hours of that training is physical exercise. More than half (55.6%) of the elite e-athletes believe that integrating physical exercise in their training programs has a positive effect on esport performance; however, no less than 47.0% of the elite e-athletes do their physical exercise chiefly to maintain overall health. Accordingly, the study indicates that elite e-athletes are active athletes as well, those of age 18 and older exercising physically more than three times the daily 21-minute activity recommendation given by World Health Organization.",
"title": ""
},
{
"docid": "68058500fd6dbbc60104a0985fecd4a8",
"text": "Instagram, a popular global mobile photo-sharing platform, involves various user interactions centered on posting images accompanied by hashtags. Participatory hashtagging, one of these diverse tagging practices, has great potential to be a communication channel for various organizations and corporations that would like to interact with users on social media. In this paper, we aim to characterize participatory hashtagging behaviors on Instagram by conducting a case study of its representative hashtagging practice, the Weekend Hashtag Project, or #WHP. By conducting a user study using both quantitative and qualitative methods, we analyzed the way Instagram users respond to participation calls and identified factors that motivate users to take part in the project. Based on these findings, we provide design strategies for any interested parties to interact with users on social media.",
"title": ""
},
{
"docid": "8c6ec02821d17fbcf79d1a42ed92a971",
"text": "OBJECTIVE\nTo explore whether an association exists between oocyte meiotic spindle morphology visualized by polarized light microscopy at the time of intracytoplasmic sperm injection and the ploidy of the resulting embryo.\n\n\nDESIGN\nProspective cohort study.\n\n\nSETTING\nPrivate IVF clinic.\n\n\nPATIENT(S)\nPatients undergoing preimplantation genetic screening/diagnosis (n = 113 patients).\n\n\nINTERVENTION(S)\nOocyte meiotic spindles were assessed by polarized light microscopy and classified at the time of intracytoplasmic sperm injection as normal, dysmorphic, translucent, telophase, or no visible spindle. Single blastomere biopsy was performed on day 3 of culture for analysis by array comparative genomic hybridization.\n\n\nMAIN OUTCOME MEASURE(S)\nSpindle morphology and embryo ploidy association was evaluated by regression methods accounting for non-independence of data.\n\n\nRESULT(S)\nThe frequency of euploidy in embryos derived from oocytes with normal spindle morphology was significantly higher than all other spindle classifications combined (odds ratio [OR] 1.93, 95% confidence interval [CI] 1.33-2.79). Oocytes with translucent (OR 0.25, 95% CI 0.13-0.46) and no visible spindle morphology (OR 0.35, 95% CI 0.19-0.63) were significantly less likely to result in euploid embryos when compared with oocytes with normal spindle morphology. There was no significant difference between normal and dysmorphic spindle morphology (OR 0.73, 95% CI 0.49-1.08), whereas no telophase spindles resulted in euploid embryos (n = 11). Assessment of spindle morphology was found to be independently associated with embryo euploidy after controlling for embryo quality (OR 1.73, 95% CI 1.16-2.60).\n\n\nCONCLUSION(S)\nOocyte spindle morphology is associated with the resulting embryo's ploidy. Oocytes with normal spindle morphology are significantly more likely to produce euploid embryos compared with oocytes with meiotic spindles that are translucent or not visible.",
"title": ""
},
{
"docid": "134f44bb808d5e873161819ebb175af5",
"text": "Like most behavior, consumer behavior too is goal driven. In turn, goals constitute cognitive constructs that can be chronically active as well as primed by features of the environment. Goal systems theory outlines the principles that characterize the dynamics of goal pursuit and explores their implications for consumer behavior. In this vein, we discuss from a common, goal systemic, perspective a variety of well known phenomena in the realm of consumer behavior including brand loyalty, variety seeking, impulsive buying, preferences, choices and regret. The goal systemic perspective affords guidelines for subsequent research on the dynamic aspects of consummatory behavior as well as offering insights into practical matters in the area of marketing. © 2011 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "f80458241f0a33aebd8044bf85bd25ec",
"text": "Brachial–ankle pulse wave velocity (baPWV) is a promising technique to assess arterial stiffness conveniently. However, it is not known whether baPWV is associated with well-established indices of central arterial stiffness. We determined the relation of baPWV with aortic (carotid-femoral) PWV, leg (femoral-ankle) PWV, and carotid augmentation index (AI) by using both cross-sectional and interventional approaches. First, we studied 409 healthy adults aged 18–76 years. baPWV correlated significantly with aortic PWV (r=0.76), leg PWV (r=0.76), and carotid AI (r=0.52). A stepwise regression analysis revealed that aortic PWV was the primary independent correlate of baPWV, explaining 58% of the total variance in baPWV. Additional 23% of the variance was explained by leg PWV. Second, 13 sedentary healthy men were studied before and after a 16-week moderate aerobic exercise intervention (brisk walking to jogging; 30–45 min/day; 4–5 days/week). Reductions in aortic PWV observed with the exercise intervention were significantly and positively associated with the corresponding changes in baPWV (r=0.74). A stepwise regression analysis revealed that changes in aortic PWV were the only independent correlate of changes in baPWV (β=0.74), explaining 55% of the total variance. These results suggest that baPWV may provide qualitatively similar information to those derived from central arterial stiffness although some portions of baPWV may be determined by peripheral arterial stiffness.",
"title": ""
},
{
"docid": "3e28cbfc53f6c42bb0de2baf5c1544aa",
"text": "Cloud computing is an emerging paradigm which allows the on-demand delivering of software, hardware, and data as services. As cloud-based services are more numerous and dynamic, the development of efficient service provisioning policies become increasingly challenging. Game theoretic approaches have shown to gain a thorough analytical understanding of the service provisioning problem.\n In this paper we take the perspective of Software as a Service (SaaS) providers which host their applications at an Infrastructure as a Service (IaaS) provider. Each SaaS needs to comply with quality of service requirements, specified in Service Level Agreement (SLA) contracts with the end-users, which determine the revenues and penalties on the basis of the achieved performance level. SaaS providers want to maximize their revenues from SLAs, while minimizing the cost of use of resources supplied by the IaaS provider. Moreover, SaaS providers compete and bid for the use of infrastructural resources. On the other hand, the IaaS wants to maximize the revenues obtained providing virtualized resources. In this paper we model the service provisioning problem as a Generalized Nash game, and we propose an efficient algorithm for the run time management and allocation of IaaS resources to competing SaaSs.",
"title": ""
},
{
"docid": "7f94ebc8ebdde9e337e6dd345c5c529e",
"text": "Forms are a standard way of gathering data into a database. Many applications need to support multiple users with evolving data gathering requirements. It is desirable to automatically link dynamic forms to the back-end database. We have developed the FormMapper system, a fully automatic solution that accepts user-created data entry forms, and maps and integrates them into an existing database in the same domain. The solution comprises of two components: tree extraction and form integration. The tree extraction component leverages a probabilistic process, Hidden Markov Model (HMM), for automatically extracting a semantic tree structure of a form. In the form integration component, we develop a merging procedure that maps and integrates a tree into an existing database and extends the database with desired properties. We conducted experiments evaluating the performance of the system on several large databases designed from a number of complex forms. Our experimental results show that the FormMapper system is promising: It generated databases that are highly similar (87% overlapped) to those generated by the human experts, given the same set of forms.",
"title": ""
}
] |
scidocsrr
|
fcc6920170f98b75c574df60aee313c8
|
Versatile and robust 3D walking with a simulated humanoid robot (Atlas): A model predictive control approach
|
[
{
"docid": "6b3da7a62570e083c2ca27a4287d6d8d",
"text": "In the area of biped robot research, much progress has been made in the past few years. However, some difficulties remain to be dealt with, particularly about the implementation of fast and dynamic walking gaits, in other words anthropomorphic gaits, especially on uneven terrain. In this perspective, both concepts of center of pressure (CoP) and zero moment point (ZMP) are obviously useful. In this paper, the two concepts are strictly defined, the CoP with respect to ground-feet contact forces, the ZMP with respect to gravity plus inertia forces. Then, the coincidence of CoP and ZMP is proven, and related control aspects are examined. Finally, a virtual CoP-ZMP is defined, allowing us to extend the concept when walking on uneven terrain. This paper is a theoretical study. Experimental results are presented in a companion paper, analyzing the evolution of the ground contact forces obtained from a human walker wearing robot feet as shoes.",
"title": ""
}
] |
[
{
"docid": "7663ad8e4f8307e8bb31b0dc92457502",
"text": "Computerized clinical decision support (CDS) aims to aid decision making of health care providers and the public by providing easily accessible health-related information at the point and time it is needed. natural language processing (NLP) is instrumental in using free-text information to drive CDS, representing clinical knowledge and CDS interventions in standardized formats, and leveraging clinical narrative. The early innovative NLP research of clinical narrative was followed by a period of stable research conducted at the major clinical centers and a shift of mainstream interest to biomedical NLP. This review primarily focuses on the recently renewed interest in development of fundamental NLP methods and advances in the NLP systems for CDS. The current solutions to challenges posed by distinct sublanguages, intended user groups, and support goals are discussed.",
"title": ""
},
{
"docid": "b2020c256ed9ec225efa6f61a3dbe198",
"text": "Since its introduction in 2011, the ISO 26262 standard has provided the state-of-the-art methodology for achieving functional safety of automotive electrical and electronic systems. Among other requirements, the standard requires estimation of quantified metrics such as the Probabilistic Metric for Hardware Failure (PMHF) using quantitative failure analysis techniques. While the standard provides some brief guidance, a complete methodology to calculate the PMHF in detail has not been well described in literature. This paper will draw out several key frameworks for successfully calculating the probabilistic metric for hardware failure using Fault Tree Analysis (FTA). At the top levels of the analysis, methods drawn from previous literature can be used to organize potential failures within a complex multifunctional system. At the lower levels of the FTA, the effects of all fault categories, including dual-point latent and detected faults, can be accounted for using appropriate diagnostic coverage and proof-test interval times. A simple example is developed throughout the paper to demonstrate the methods. Some simplifications are proposed to estimate an upper bound on the PMHF. Conclusions are drawn related to the steps and methods employed, and the nature of PMHF calculation in practical real-world systems.",
"title": ""
},
{
"docid": "957513955d09b0a878ea6719c7314200",
"text": "Attention Deficit Hyperactivity Disorder (ADHD) is a childhood syndrome characterized by short attention span, impulsiveness, and hyperactivity, which often lead ? to learning disabilities and various behavioral problems. For the treatment of ADHD, medication and cognitive-behavior therapy is applied in recent years. Although psychostimulant medication has been widely used for many years, current findings suggest that, as the sole treatment for ADHD, it is an inadequate form of intervention in that parents don’t want their child to use drug and the effects are limited to the period in which the drugs are physiologically active. On the other hand, EEG biofeedback treatment studies for ADHD have reported promising results not only in significant reductions in hyperactive, inattentive, and disruptive behaviors, but also improvements in academic performance and IQ scores. However it is too boring for children to finish the whole treatment. The recent increase in computer usage in medicine and rehabilitation has changed the way health care is delivered. Virtual Reality technology provides specific stimuli that can be used in removing distractions and providing environments that get the subjects’ attention and increasing their ability to concentrate. And Virtual Reality technology can hold a patient’s attention for a longer period of time than other methods can, because VR is immersive, interactive and imaginal. Based on these aspects, we developed Attention Enhancement System (AES) using Virtual Reality technology and EEG biofeedback for assessing and treating ADHD children as well as increasing the attention span of children who have attention difficulty.",
"title": ""
},
{
"docid": "e2427ff836c8b83a75d8f7074656a025",
"text": "With the rapid growth of smartphone and tablet users, Device-to-Device (D2D) communications have become an attractive solution for enhancing the performance of traditional cellular networks. However, relevant security issues involved in D2D communications have not been addressed yet. In this paper, we investigate the security requirements and challenges for D2D communications, and present a secure and efficient key agreement protocol, which enables two mobile devices to establish a shared secret key for D2D communications without prior knowledge. Our approach is based on the Diffie-Hellman key agreement protocol and commitment schemes. Compared to previous work, our proposed protocol introduces less communication and computation overhead. We present the design details and security analysis of the proposed protocol. We also integrate our proposed protocol into the existing Wi-Fi Direct protocol, and implement it using Android smartphones.",
"title": ""
},
{
"docid": "685b1471c334c941507ac12eb6680872",
"text": "Purpose – The concept of ‘‘knowledge’’ is presented in diverse and sometimes even controversial ways in the knowledge management (KM) literature. The aim of this paper is to identify the emerging views of knowledge and to develop a framework to illustrate the interrelationships of the different knowledge types. Design/methodology/approach – This paper is a literature review to explore how ‘‘knowledge’’ as a central concept is presented and understood in a selected range of KM publications (1990-2004). Findings – The exploration of the knowledge landscape showed that ‘‘knowledge’’ is viewed in four emerging and complementary ways. The ontological, epistemological, commodity, and community views of knowledge are discussed in this paper. The findings show that KM is still a young discipline and therefore it is natural to have different, sometimes even contradicting views of ‘‘knowledge’’ side by side in the literature. Practical implications – These emerging views of knowledge could be seen as opportunities for researchers to provide new contributions. However, this diversity and complexity call for careful and specific clarification of the researchers’ standpoint, for a clear statement of their views of knowledge. Originality/value – This paper offers a framework as a compass for researchers to help their orientation in the confusing and ever changing landscape of knowledge.",
"title": ""
},
{
"docid": "7c050db718a21009908655cc99705d35",
"text": "a Department of Communication, Management Science and Systems, 333 Lord Christopher Baldy Hall, State University of New York at Buffalo, Buffalo, NY 14260, United States b Department of Finance, Operations and Information Systems, Brock University, Canada c Department of Information Systems and Operations Management, Ball State University, United States d Department of Information Systems and Operations Management, University of Texas at Arlington, United States e Management Science and Systems, State University of New York at Buffalo, United States",
"title": ""
},
{
"docid": "c472282e37efa603d1fef03f33ae258e",
"text": "This research established collective nostalgia as a group-level emotion and ascertained the benefits it confers on the group. In Study 1, participants who reflected on a nostalgic event they had experienced together with ingroup members (collective nostalgia) evaluated the ingroup more positively and reported stronger intentions to approach (and not avoid) ingroup members than those who recalled a nostalgic event they had experienced individually (personal nostalgia), those who reflected on a lucky event they had experienced together with ingroup members (collective positive), and those who did not recall an event (no recall). In Study 2, collective (vs. personal) nostalgia strengthened behavioral intentions to support the ingroup more so than did recalling an ordinary collective (vs. personal) event. Increased collective self-esteem mediated this effect. In Study 3, collective nostalgia (compared with recall of an ordinary collective event) led participants to sacrifice money in order to punish a transgression perpetrated against an ingroup member. This effect of collective nostalgia was more pronounced when social identification was high (compared with low). Finally, in Study 4, collective nostalgia converged toward the group average (i.e., was socially shared) when participants thought of themselves in terms of their group membership. The findings underscore the viability of studying nostalgia at multiple levels of analysis and highlight the significance of collective nostalgia for understanding group-level attitudes, global action tendencies, specific behavioral intentions, and behavior.",
"title": ""
},
{
"docid": "05f2a86b58758d2b9fbdbd4ecdde01b2",
"text": "In this paper we discuss the use of Data Mining to provide a solution to the problem of cross-sales. We define and analyse the cross-sales problem and develop a hybrid methodology to solve it, using characteristic rule discovery and deviation detection. Deviation detection is used as a measure of interest to filter out the less interesting characteristic roles and only retain the best characteristic rules discovered. The effect of domain knowledge on the interestingness value of the discovered rules is discussed and techniques for relining the knowledge to increase this interestingness measure are studied. We also investigate the use of externally procured lifestyle and other survey data for data enrichment and discuss its use as additional domain knowledge. The developed methodology has been applied to a real world cross-sales problem within the financial sector, and the results are also presented in this paper. Although the application described is in the financial sector, the methodology is generic in nature and can be applied to other sectors. © 1998 Elsevier Science B.V. All rights reserved. Kevwords: Cross-sales: Data Mining; Characteristic rule discovery: Deviation detection",
"title": ""
},
{
"docid": "a466b8da35f820eaaf597e1768b3e3f4",
"text": "The Internet of Things technology has been widely used in the quality tracking of agricultural products, however, the safety of storage for tracked data is still a serious challenge. Recently, with the expansion of blockchain technology applied in cross-industry field, the unchangeable features of its stored data provide us new vision about ensuring the storage safety for tracked data. Unfortunately, when the blockchain technology is directly applied in agricultural products tracking and data storage, it is difficult to automate storage and obtain the hash data stored in the blockchain in batches base on the identity. Addressing this issue, we propose a double-chain storage structure, and design a secured data storage scheme for tracking agricultural products based on blockchain. Specifically, the chained data structure is utilized to store the blockchain transaction hash, and together with the chain of the blockchain to form a double-chain storage, which ensures the data of agricultural products will not be maliciously tampered or destructed. Finally, in the practical application system, we verify the correctness and security of the proposed storage scheme.",
"title": ""
},
{
"docid": "347278d002cdea4fe830b5d1a6b7bc62",
"text": "The question of what function is served by the cortical column has occupied neuroscientists since its original description some 60years ago. The answer seems tractable in the somatosensory cortex when considering the inputs to the cortical column and the early stages of information processing, but quickly breaks down once the multiplicity of output streams and their sub-circuits are brought into consideration. This article describes the early stages of information processing in the barrel cortex, through generation of the center and surround receptive field components of neurons that subserve integration of multi whisker information, before going on to consider the diversity of properties exhibited by the layer 5 output neurons. The layer 5 regular spiking (RS) neurons differ from intrinsic bursting (IB) neurons in having different input connections, plasticity mechanisms and corticofugal projections. In particular, layer 5 RS cells employ noise reduction and homeostatic plasticity mechanism to preserve and even increase information transfer, while IB cells use more conventional Hebbian mechanisms to achieve a similar outcome. It is proposed that the rodent analog of the dorsal and ventral streams, a division reasonably well established in primate cortex, might provide a further level of organization for RS cell function and hence sub-circuit specialization.",
"title": ""
},
{
"docid": "d57399324370a905c28067f3f425ce57",
"text": "In this work we pursue a data-driven approach to the problem of estimating surface normals from a single intensity image, focusing in particular on human faces. We introduce new methods to exploit the currently available facial databases for dataset construction and tailor a deep convolutional neural network to the task of estimating facial surface normals in-the-wild. We train a fully convolutional network that can accurately recover facial normals from images including a challenging variety of expressions and facial poses. We compare against state-of-the-art face Shape-from-Shading and 3D reconstruction techniques and show that the proposed network can recover substantially more accurate and realistic normals. Furthermore, in contrast to other existing face-specific surface recovery methods, we do not require the solving of an explicit alignment step due to the fully convolutional nature of our network.",
"title": ""
},
{
"docid": "4ddb0d4bf09dc9244ee51d4b843db5f2",
"text": "BACKGROUND\nMobile applications (apps) have potential for helping people increase their physical activity, but little is known about the behavior change techniques marketed in these apps.\n\n\nPURPOSE\nThe aim of this study was to characterize the behavior change techniques represented in online descriptions of top-ranked apps for physical activity.\n\n\nMETHODS\nTop-ranked apps (n=167) were identified on August 28, 2013, and coded using the Coventry, Aberdeen and London-Revised (CALO-RE) taxonomy of behavior change techniques during the following month. Analyses were conducted during 2013.\n\n\nRESULTS\nMost descriptions of apps incorporated fewer than four behavior change techniques. The most common techniques involved providing instruction on how to perform exercises, modeling how to perform exercises, providing feedback on performance, goal-setting for physical activity, and planning social support/change. A latent class analysis revealed the existence of two types of apps, educational and motivational, based on their configurations of behavior change techniques.\n\n\nCONCLUSIONS\nBehavior change techniques are not widely marketed in contemporary physical activity apps. Based on the available descriptions and functions of the observed techniques in contemporary health behavior theories, people may need multiple apps to initiate and maintain behavior change. This audit provides a starting point for scientists, developers, clinicians, and consumers to evaluate and enhance apps in this market.",
"title": ""
},
{
"docid": "768ed187f94163727afd011817a306c6",
"text": "Although interest regarding the role of dispositional affect in job behaviors has surged in recent years, the true magnitude of affectivity's influence remains unknown. To address this issue, the authors conducted a qualitative and quantitative review of the relationships between positive and negative affectivity (PA and NA, respectively) and various performance dimensions. A series of meta-analyses based on 57 primary studies indicated that PA and NA predicted task performance in the hypothesized directions and that the relationships were strongest for subjectively rated versus objectively rated performance. In addition, PA was related to organizational citizenship behaviors but not withdrawal behaviors, and NA was related to organizational citizenship behaviors, withdrawal behaviors, counterproductive work behaviors, and occupational injury. Mediational analyses revealed that affect operated through different mechanisms in influencing the various performance dimensions. Regression analyses documented that PA and NA uniquely predicted task performance but that extraversion and neuroticism did not, when the four were considered simultaneously. Discussion focuses on the theoretical and practical implications of these findings. (PsycINFO Database Record (c) 2009 APA, all rights reserved).",
"title": ""
},
{
"docid": "255de21131ccf74c3269cc5e7c21820b",
"text": "This paper discusses the effect of driving current on frequency response of the two types of light emitting diodes (LEDs), namely, phosphor-based LED and single color LED. The experiments show that the influence of the change of driving current on frequency response of phosphor-based LED is not obvious compared with the single color LED(blue, red and green). The experiments also find that the bandwidth of the white LED was expanded from 1MHz to 32MHz by the pre-equalization strategy and 26Mbit/s transmission speed was taken under Bit Error Ratio of 7.55×10-6 within 3m by non-return-to-zero on-off-keying modulation. Especially, the frequency response intensity of the phosphor-based LED is little influenced by the fluctuation of the driving current, which meets the requirements that the indoor light source needs to be adjusted in real-time by driving current. As the bandwidth of the single color LED is changed by the driving current obviously, the LED modulation bandwidth should be calculated according to the minimum driving current while we consider the requirement of the VLC transmission speed.",
"title": ""
},
{
"docid": "1b3b2b8872d3b846120502a7a40e03d0",
"text": "A viable fully on-line adaptive brain computer interface (BCI) is introduced. On-line experiments with nine naive and able-bodied subjects were carried out using a continuously adaptive BCI system. The data were analyzed and the viability of the system was studied. The BCI was based on motor imagery, the feature extraction was performed with an adaptive autoregressive model and the classifier used was an adaptive quadratic discriminant analysis. The classifier was on-line updated by an adaptive estimation of the information matrix (ADIM). The system was also able to provide continuous feedback to the subject. The success of the feedback was studied analyzing the error rate and mutual information of each session and this analysis showed a clear improvement of the subject's control of the BCI from session to session.",
"title": ""
},
{
"docid": "5e0c3d5bffa57cf265b23d037d141b5e",
"text": "A processing unit in a computer system is composed of a data path and a control unit. The data path includes registers, function units such as ALUs (arithmetic and logic units) and shifters, interface units for main memory and/or I/O busses, and internal processor busses. The control unit governs the series of steps taken by the data path during the execution of a user-visible instruction, or macroinstruction (e.g., load, add, store, conditional branch).",
"title": ""
},
{
"docid": "c7351e8ce6d32b281d5bd33b245939c6",
"text": "In TREC 2002 the Berkeley group participated only in the English-Arabic cross-language retrieval (CLIR) track. One Arabic monolingual run and three English-Arabic cross-language runs were submitted. Our approach to the crosslanguage retrieval was to translate the English topics into Arabic using online English-Arabic machine translation systems. The four official runs are named as BKYMON, BKYCL1, BKYCL2, and BKYCL3. The BKYMON is the Arabic monolingual run, and the other three runs are English-to-Arabic cross-language runs. This paper reports on the construction of an Arabic stoplist and two Arabic stemmers, and the experiments on Arabic monolingual retrieval, English-to-Arabic cross-language retrieval.",
"title": ""
},
{
"docid": "32e71f6ea2a624d669dfbb7a52042432",
"text": "In this paper, a design method of an ultra-wideband multi-section power divider on suspended stripline (SSL) is presented. A clear design guideline for ultra-wideband power dividers is provided. As a design example, a 10-section SSL power divider is implemented. The fabricated divider exhibits the minimum insertion loss of 0.3 dB, the maximum insertion loss of 1.5 dB from 1 to 19 GHz. The measured VSWR is typically 1.40:1, and the isolation between output-port is typically 20 dB.",
"title": ""
},
{
"docid": "f68161697aed6d12598b0b9e34aeae68",
"text": "Automation in agriculture comes into play to increase productivity, quality and economic growth of the country. Fruit grading is an important process for producers which affects the fruits quality evaluation and export market. Although the grading and sorting can be done by the human, but it is slow, labor intensive, error prone and tedious. Hence, there is a need of an intelligent fruit grading system. In recent years, researchers had developed numerous algorithms for fruit sorting using computer vision. Color, textural and morphological features are the most commonly used to identify the diseases, maturity and class of the fruits. Subsequently, these features are used to train soft computing technique network. In this paper, use of image processing in agriculture has been reviewed so as to provide an insight to the use of vision based systems highlighting their advantages and disadvantages.",
"title": ""
},
{
"docid": "8f0a6769c33531594819ae8f47a42337",
"text": "Multirotor unmanned aerial vehicles (UAVs) are rapidly gaining popularity for many applications. However, safe operation in partially unknown, unstructured environments remains an open question. In this paper, we present a continuous-time trajectory optimization method for real-time collision avoidance on multirotor UAVs. We then propose a system where this motion planning method is used as a local replanner, that runs at a high rate to continuously recompute safe trajectories as the robot gains information about its environment. We validate our approach by comparing against existing methods and demonstrate the complete system avoiding obstacles on a multirotor UAV platform.",
"title": ""
}
] |
scidocsrr
|
3e00cf3486170e2ba31220c925a6b526
|
Towards Safe Autonomous Driving: Capture Uncertainty in the Deep Neural Network For Lidar 3D Vehicle Detection
|
[
{
"docid": "4fc6ac1b376c965d824b9f8eb52c4b50",
"text": "Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as -greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.",
"title": ""
},
{
"docid": "b97e58184a94d6827bf294a3b1f91687",
"text": "A good and robust sensor data fusion in diverse weather conditions is a quite challenging task. There are several fusion architectures in the literature, e.g. the sensor data can be fused right at the beginning (Early Fusion), or they can be first processed separately and then concatenated later (Late Fusion). In this work, different fusion architectures are compared and evaluated by means of object detection tasks, in which the goal is to recognize and localize predefined objects in a stream of data. Usually, state-of-the-art object detectors based on neural networks are highly optimized for good weather conditions, since the well-known benchmarks only consist of sensor data recorded in optimal weather conditions. Therefore, the performance of these approaches decreases enormously or even fails in adverse weather conditions. In this work, different sensor fusion architectures are compared for good and adverse weather conditions for finding the optimal fusion architecture for diverse weather situations. A new training strategy is also introduced such that the performance of the object detector is greatly enhanced in adverse weather scenarios or if a sensor fails. Furthermore, the paper responds to the question if the detection accuracy can be increased further by providing the neural network with a-priori knowledge such as the spatial calibration of the sensors.",
"title": ""
}
] |
[
{
"docid": "639c8142b14f0eed40b63c0fa7580597",
"text": "The purpose of this study is to give an overlook and comparison of best known data warehouse architectures. Single-layer, two-layer, and three-layer architectures are structure-oriented one that are depending on the number of layers used by the architecture. In independent data marts architecture, bus, hub-and-spoke, centralized and distributed architectures, the main layers are differently combined. Listed data warehouse architectures are compared based on organizational structures, with its similarities and differences. The second comparison gives a look into information quality (consistency, completeness, accuracy) and system quality (integration, flexibility, scalability). Bus, hub-and-spoke and centralized data warehouse architectures got the highest scores in information and system quality assessment.",
"title": ""
},
{
"docid": "3097273de70077bac4a56b3f7e7b0ed4",
"text": "Transverse flux machine (TFM) useful for in-wheel motor applications is presented. This transverse flux permanent magnet motor is designed to achieve high torque-to-weight ratio and is suitable for direct-drive wheel applications. As in conventional TFM, the phases are located under each other, which will increase the axial length of the machine. The idea of this design is to reduce the axial length of TFM, by placing the windings around the stator and by shifting those from each other by electrically 120° or 90°, for three- or two-phase machine, respectively. Therefore, a remarkable reduction on the total axial length of the machine will be achieved while keeping the torque density high. This TFM is compared to another similar TFM, in which the three phases have been divided into two halves and placed opposite each other to ensure the mechanical balance and stability of the stator. The corresponding mechanical phase shifts between the phases have accordingly been taken into account. The motors are modelled in finite-element method (FEM) program, Flux3D, and designed to meet the specifications of an optimisation scheme, subject to certain constraints, such as construction dimensions, electric and magnetic loading. Based on this comparison study, many recommendations have been suggested to achieve optimum results.",
"title": ""
},
{
"docid": "9b2dd28151751477cc46f6c6d5ec475f",
"text": "Clinical and experimental data indicate that most acupuncture clinical results are mediated by the central nervous system, but the specific effects of acupuncture on the human brain remain unclear. Even less is known about its effects on the cerebellum. This fMRI study demonstrated that manual acupuncture at ST 36 (Stomach 36, Zusanli), a main acupoint on the leg, modulated neural activity at multiple levels of the cerebro-cerebellar and limbic systems. The pattern of hemodynamic response depended on the psychophysical response to needle manipulation. Acupuncture stimulation typically elicited a composite of sensations termed deqi that is related to clinical efficacy according to traditional Chinese medicine. The limbic and paralimbic structures of cortical and subcortical regions in the telencephalon, diencephalon, brainstem and cerebellum demonstrated a concerted attenuation of signal intensity when the subjects experienced deqi. When deqi was mixed with sharp pain, the hemodynamic response was mixed, showing a predominance of signal increases instead. Tactile stimulation as control also elicited a predominance of signal increase in a subset of these regions. The study provides preliminary evidence for an integrated response of the human cerebro-cerebellar and limbic systems to acupuncture stimulation at ST 36 that correlates with the psychophysical response.",
"title": ""
},
{
"docid": "4be9ae4bc6fb01e78d550bedf199d0b0",
"text": "Protein timing is a popular dietary strategy designed to optimize the adaptive response to exercise. The strategy involves consuming protein in and around a training session in an effort to facilitate muscular repair and remodeling, and thereby enhance post-exercise strength- and hypertrophy-related adaptations. Despite the apparent biological plausibility of the strategy, however, the effectiveness of protein timing in chronic training studies has been decidedly mixed. The purpose of this paper therefore was to conduct a multi-level meta-regression of randomized controlled trials to determine whether protein timing is a viable strategy for enhancing post-exercise muscular adaptations. The strength analysis comprised 478 subjects and 96 ESs, nested within 41 treatment or control groups and 20 studies. The hypertrophy analysis comprised 525 subjects and 132 ESs, nested with 47 treatment or control groups and 23 studies. A simple pooled analysis of protein timing without controlling for covariates showed a small to moderate effect on muscle hypertrophy with no significant effect found on muscle strength. In the full meta-regression model controlling for all covariates, however, no significant differences were found between treatment and control for strength or hypertrophy. The reduced model was not significantly different from the full model for either strength or hypertrophy. With respect to hypertrophy, total protein intake was the strongest predictor of ES magnitude. These results refute the commonly held belief that the timing of protein intake in and around a training session is critical to muscular adaptations and indicate that consuming adequate protein in combination with resistance exercise is the key factor for maximizing muscle protein accretion.",
"title": ""
},
{
"docid": "153a22e4477a0d6ce98b9a0fba2ab595",
"text": "Uninterruptible power supplies (UPSs) have been used in many installations for critical loads that cannot afford power failure or surge during operation. It is often difficult to upgrade the UPS system as the load grows over time. Due to lower cost and maintenance, as well as ease of increasing system capacity, the parallel operation of modularized small-power UPS has attracted much attention in recent years. In this paper, a new scheme for parallel operation of inverters is introduced. A multiple-input-multiple-output state-space model is developed to describe the parallel-connected inverters system, and a model-predictive-control scheme suitable for paralleled inverters control is proposed. In this algorithm, the control objectives of voltage tracking and current sharing are formulated using a weighted cost function. The effectiveness and the hot-swap capability of the proposed parallel-connected inverters system have been verified with experimental results.",
"title": ""
},
{
"docid": "a68244dedee73f87103a1e05a8c33b20",
"text": "Given the knowledge that the same or similar objects appear in a set of images, our goal is to simultaneously segment that object from the set of images. To solve this problem, known as the cosegmentation problem, we present a method based upon hierarchical clustering. Our framework first eliminates intra-class heterogeneity in a dataset by clustering similar images together into smaller groups. Then, from each image, our method extracts multiple levels of segmentation and creates connections between regions (e.g. superpixel) across levels to establish intra-image multi-scale constraints. Next we take advantage of the information available from other images in our group. We design and present an efficient method to create inter-image relationships, e.g. connections between image regions from one image to all other images in an image cluster. Given the intra & inter-image connections, we perform a segmentation of the group of images into foreground and background regions. Finally, we compare our segmentation accuracy to several other state-of-the-art segmentation methods on standard datasets, and also demonstrate the robustness of our method on real world data.",
"title": ""
},
{
"docid": "489e4bab8e975d9d82380adcd1692385",
"text": "Nonnegative tucker decomposition (NTD) is a recent multiway extension of nonnegative matrix factorization (NMF), where nonnega- tivity constraints are incorporated into Tucker model. In this paper we consider alpha-divergence as a discrepancy measure and derive multiplicative updating algorithms for NTD. The proposed multiplicative algorithm includes some existing NMF and NTD algorithms as its special cases, since alpha-divergence is a one-parameter family of divergences which accommodates KL-divergence, Hellinger divergence, X2 divergence, and so on. Numerical experiments on face images show how different values of alpha affect the factorization results under different types of noise.",
"title": ""
},
{
"docid": "9f97fffcb1b0a1f92443c9c769438cf5",
"text": "A literature review was done within a revision of a guideline concerned with data quality management in registries and cohort studies. The review focused on quality indicators, feedback, and source data verification. Thirty-nine relevant articles were selected in a stepwise selection process. The majority of the papers dealt with indicators. The papers presented concepts or data analyses. The leading indicators were related to case or data completeness, correctness, and accuracy. In the future, data pools as well as research reports from quantitative studies should be obligatory supplemented by information about their data quality, ideally picking up some indicators presented in this review.",
"title": ""
},
{
"docid": "b1f348ff63eaa97f6eeda5fcd81330a9",
"text": "The recent expansion of the cloud computing paradigm has motivated educators to include cloud-related topics in computer science and computer engineering curricula. While programming and algorithm topics have been covered in different undergraduate and graduate courses, cloud architecture/system topics are still not usually studied in academic contexts. But design, deployment and management of datacenters, virtualization technologies for cloud, cloud management tools and similar issues should be addressed in current computer science and computer engineering programs. This work presents our approach and experiences in designing and implementing a curricular module covering all these topics. In this approach the utilization of a simulation tool, CloudSim, is essential to allow the students a practical approximation to the course contents.",
"title": ""
},
{
"docid": "37de72b0e9064d09fb6901b40d695c0a",
"text": "BACKGROUND AND OBJECTIVES\nVery little is known about the use of probiotics among pregnant women with gestational diabetes mellitus (GDM) especially its effect on oxidative stress and inflammatory indices. The aim of present study was to measure the effect of a probiotic supplement capsule on inflammation and oxidative stress biomarkers in women with newly-diagnosed GDM.\n\n\nMETHODS AND STUDY DESIGN\n64 pregnant women with GDM were enrolled in a double-blind placebo controlled randomized clinical trial in the spring and summer of 2014. They were randomly assigned to receive either a probiotic containing four bacterial strains of Lactobacillus acidophilus LA-5, Bifidobacterium BB-12, Streptococcus Thermophilus STY-31 and Lactobacillus delbrueckii bulgaricus LBY-27 or placebo capsule for 8 consecutive weeks. Blood samples were taken pre- and post-treatment and serum indices of inflammation and oxidative stress were assayed. The measured mean response scales were then analyzed using mixed effects model. All statistical analysis was performed using Statistical Package for Social Sciences (SPSS) software (version 16).\n\n\nRESULTS\nSerum high-sensitivity C-reactive protein and tumor necrosis factor-α levels improved in the probiotic group to a statistically significant level over the placebo group. Serum interleukin-6 levels decreased in both groups after intervention; however, neither within group nor between group differences interleukin-6 serum levels was statistically significant. Malondialdehyde, glutathione reductase and erythrocyte glutathione peroxidase levels improved significantly with the use of probiotics when compared with the placebo.\n\n\nCONCLUSIONS\nThe probiotic supplement containing L.acidophilus LA- 5, Bifidobacterium BB- 12, S.thermophilus STY-31 and L.delbrueckii bulgaricus LBY-2 appears to improve several inflammation and oxidative stress biomarkers in women with GDM.",
"title": ""
},
{
"docid": "bbb1dc09e41e08e095a48e9e2a806356",
"text": "Using the inexpensive Raspberry Pi to automate the tasks at home such as switching appliances on & off over Wi-Fi (Wireless Fidelity) or LAN(Local Area Network) using a personal computer or a mobile or a tablet through the browser. This can also be done by using the dedicated Android application. The conventional switch boards will be added with a touch screen or replaced with a touch screen to match the taste of the user's home decor. PIR (Passive Infrared Sensor) sensor will be used to detect human detection and automate the on and off functionality.",
"title": ""
},
{
"docid": "6d329c1fa679ac201387c81f59392316",
"text": "Mosquitoes represent the major arthropod vectors of human disease worldwide transmitting malaria, lymphatic filariasis, and arboviruses such as dengue virus and Zika virus. Unfortunately, no treatment (in the form of vaccines or drugs) is available for most of these diseases andvectorcontrolisstillthemainformofprevention. Thelimitationsoftraditionalinsecticide-based strategies, particularly the development of insecticide resistance, have resulted in significant efforts to develop alternative eco-friendly methods. Biocontrol strategies aim to be sustainable and target a range of different mosquito species to reduce the current reliance on insecticide-based mosquito control. In thisreview, weoutline non-insecticide basedstrategiesthat havebeenimplemented orare currently being tested. We also highlight the use of mosquito behavioural knowledge that can be exploited for control strategies.",
"title": ""
},
{
"docid": "827ecd05ff323a45bf880a65f34494e9",
"text": "BACKGROUND\nSocial support can be a critical component of how a woman adjusts to infertility, yet few studies have investigated its impact on infertility-related coping and stress. We examined relationships between social support contexts and infertility stress domains, and tested if they were mediated by infertility-related coping strategies in a sample of infertile women.\n\n\nMETHODS\nThe Multidimensional Scale of Perceived Social Support, the Copenhagen Multi-centre Psychosocial Infertility coping scales and the Fertility Problem Inventory were completed by 252 women seeking treatment. Structural equation modeling analysis was used to test the hypothesized multiple mediation model.\n\n\nRESULTS\nThe final model revealed negative effects from perceived partner support to relationship concern (β = -0.47), sexual concern (β = -0.20) and rejection of childfree lifestyle through meaning-based coping (β = -0.04). Perceived friend support had a negative effect on social concern through active-confronting coping (β = -0.04). Finally, besides a direct negative association with social concern (β = -0.30), perceived family support was indirectly and negatively related with all infertility stress domains (β from -0.04 to -0.13) through a positive effect of active-avoidance coping. The model explained between 12 and 66% of the variance of outcomes.\n\n\nCONCLUSIONS\nDespite being limited by a convenience sampling and cross-sectional design, results highlight the importance of social support contexts in helping women deal with infertility treatment. Health professionals should explore the quality of social networks and encourage seeking positive support from family and partners. Findings suggest it might prove useful for counselors to use coping skills training interventions, by retraining active-avoidance coping into meaning-based and active-confronting strategies.",
"title": ""
},
{
"docid": "6d110ceb82878e13014ee9b9ab63a7d1",
"text": "The fuzzy control algorithm that carries on the intelligent control twelve phases three traffic lanes single crossroads traffic light, works well in the real-time traffic flow under flexible operation. The procedures can be described as below: first, the number of vehicles of all the lanes can be received through the sensor, and the phase with the largest number is stipulated to be highest priority, while the phase turns to the next one from the previous, it transfers into the highest priority. Then the best of the green light delay time can be figured out under the fuzzy rules reasoning on the current waiting formation length and general formation length. The simulation result indicates the fuzzy control method on vehicle delay time compared with the traditional timed control method is greatly improved.",
"title": ""
},
{
"docid": "d2fb10bdbe745ace3a2512ccfa414d4c",
"text": "In cloud computing environment, especially in big data era, adversary may use data deduplication service supported by the cloud service provider as a side channel to eavesdrop users' privacy or sensitive information. In order to tackle this serious issue, in this paper, we propose a secure data deduplication scheme based on differential privacy. The highlights of the proposed scheme lie in constructing a hybrid cloud framework, using convergent encryption algorithm to encrypt original files, and introducing differential privacy mechanism to resist against the side channel attack. Performance evaluation shows that our scheme is able to effectively save network bandwidth and disk storage space during the processes of data deduplication. Meanwhile, security analysis indicates that our scheme can resist against the side channel attack and related files attack, and prevent the disclosure of privacy information.",
"title": ""
},
{
"docid": "aecacf7d1ba736899f185ee142e32522",
"text": "BACKGROUND\nLow rates of handwashing compliance among nurses are still reported in literature. Handwashing beliefs and attitudes were found to correlate and predict handwashing practices. However, such an important field is not fully explored in Jordan.\n\n\nOBJECTIVES\nThis study aims at exploring Jordanian nurses' handwashing beliefs, attitudes, and compliance and examining the predictors of their handwashing compliance.\n\n\nMETHODS\nA cross-sectional multicenter survey design was used to collect data from registered nurses and nursing assistants (N = 198) who were providing care to patients in governmental hospitals in Jordan. Data collection took place over 3 months during the period of February 2011 to April 2011 using the Handwashing Assessment Inventory.\n\n\nRESULTS\nParticipants' mean score of handwashing compliance was 74.29%. They showed positive attitudes but seemed to lack knowledge concerning handwashing. Analysis revealed a 5-predictor model, which accounted for 37.5% of the variance in nurses' handwashing compliance. Nurses' beliefs relatively had the highest prediction effects (β = .309, P < .01), followed by skin assessment (β = .290, P < .01).\n\n\nCONCLUSION\nJordanian nurses reported moderate handwashing compliance and were found to lack knowledge concerning handwashing protocols, for which education programs are recommended. This study raised the awareness regarding the importance of complying with handwashing protocols.",
"title": ""
},
{
"docid": "03dcfd0b89b7eee84d678371c13e97c2",
"text": "Recommender systems oen use latent features to explain the behaviors of users and capture the properties of items. As users interact with dierent items over time, user and item features can inuence each other, evolve and co-evolve over time. e compatibility of user and item’s feature further inuence the future interaction between users and items. Recently, point process based models have been proposed in the literature aiming to capture the temporally evolving nature of these latent features. However, these models oen make strong parametric assumptions about the evolution process of the user and item latent features, which may not reect the reality, and has limited power in expressing the complex and nonlinear dynamics underlying these processes. To address these limitations, we propose a novel deep coevolutionary network model (DeepCoevolve), for learning user and item features based on their interaction graph. DeepCoevolve use recurrent neural network (RNN) over evolving networks to dene the intensity function in point processes, which allows the model to capture complex mutual inuence between users and items, and the feature evolution over time. We also develop an ecient procedure for training the model parameters, and show that the learned models lead to signicant improvements in recommendation and activity prediction compared to previous state-of-the-arts parametric models.",
"title": ""
},
{
"docid": "d1668503d8986884035c8784d1f3f426",
"text": "Feature extraction is a classic problem of machine vision and image processing. Edges are often detected using integer-order differential operators. In this paper, a one-dimensional digital fractional-order Charef differentiator (1D-FCD) is introduced and extended to 2D by a multi-directional operator. The obtained 2D-fractional differentiation (2D-FCD) is a new edge detection operation. The computed multi-directional mask coefficients are computed in a way that image details are detected and preserved. Experiments on texture images have demonstrated the efficiency of the proposed filter compared to existing techniques.",
"title": ""
},
{
"docid": "d050730d7a5bd591b805f1b9729b0f2d",
"text": "In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping bought about by such deep learning approaches, given sufficient training sets.",
"title": ""
},
{
"docid": "65a9813786554ede5e3c36f62b345ad8",
"text": "Web search queries provide a surprisingly large amount of information, which can be potentially organized and converted into a knowledgebase. In this paper, we focus on the problem of automatically identifying brand and product entities from a large collection of web queries in online shopping domain. We propose an unsupervised approach based on adaptor grammars that does not require any human annotation efforts nor rely on any external resources. To reduce the noise and normalize the query patterns, we introduce a query standardization step, which groups multiple search patterns and word orderings together into their most frequent ones. We present three different sets of grammar rules used to infer query structures and extract brand and product entities. To give an objective assessment of the performance of our approach, we conduct experiments on a large collection of online shopping queries and intrinsically evaluate the knowledgebase generated by our method qualitatively and quantitatively. In addition, we also evaluate our framework on extrinsic tasks on query tagging and chunking. Our empirical studies show that the knowledgebase discovered by our approach is highly accurate, has good coverage and significantly improves the performance on the external tasks.",
"title": ""
}
] |
scidocsrr
|
dd370141ec6590bd8fee82b45d186d9c
|
An intelligent system approach to higher-dimensional classification of volume data
|
[
{
"docid": "63baa6371fc07d3ef8186f421ddf1070",
"text": "With the first few words of Neural Networks and Intellect: Using Model-Based Concepts, Leonid Perlovsky embarks on the daring task of creating a mathematical concept of “the mind.” The content of the book actually exceeds even the most daring of expectations. A wide variety of concepts are linked together intertwining the development of artificial intelligence, evolutionary computation, and even the philosophical observations ranging from Aristotle and Plato to Kant and Gvdel. Perlovsky discusses fundamental questions with a number of engineering applications to filter them through philosophical categories (both ontological and epistemological). In such a fashion, the inner workings of the human mind, consciousness, language-mind relationships, learning, and emotions are explored mathematically in amazing details. Perlovsky even manages to discuss the concept of beauty perception in mathematical terms. Beginners will appreciate that Perlovsky starts with the basics. The first chapter contains an introduction to probability, statistics, and pattern recognition, along with the intuitive explanation of the complicated mathematical concepts. The second chapter reviews numerous mathematical approaches, algorithms, neural networks, and the fundamental mathematical ideas underlying each method. It analyzes fundamental limitations of the nearest neighbor methods and the simple neural network. Vapnik’s statistical learning theory, support vector machines, and Grossberg’s neural field theories are clearly explained. Roles of hierarchical organization and evolutionary computation are analyzed. Even experts in the field might find interesting the relationships among various algorithms and approaches. Fundamental mathematical issues include origins of combinatorial complexity (CC) of many algorithms and neural networks (operations or training) and its relationship to di-",
"title": ""
}
] |
[
{
"docid": "665fcc17971dc34ed6f89340e3b7bfe2",
"text": "Central to the development of computer vision systems is the collection and use of annotated images spanning our visual world. Annotations may include information about the identity, spatial extent, and viewpoint of the objects present in a depicted scene. Such a database is useful for the training and evaluation of computer vision systems. Motivated by the availability of images on the Internet, we introduced a web-based annotation tool that allows online users to label objects and their spatial extent in images. To date, we have collected over 400 000 annotations that span a variety of different scene and object classes. In this paper, we show the contents of the database, its growth over time, and statistics of its usage. In addition, we explore and survey applications of the database in the areas of computer vision and computer graphics. Particularly, we show how to extract the real-world 3-D coordinates of images in a variety of scenes using only the user-provided object annotations. The output 3-D information is comparable to the quality produced by a laser range scanner. We also characterize the space of the images in the database by analyzing 1) statistics of the co-occurrence of large objects in the images and 2) the spatial layout of the labeled images.",
"title": ""
},
{
"docid": "c4df2361d80e8619e2d3d8b052ae2abc",
"text": "Learning from Demonstration (LfD) explores techniques for learning a task policy from examples provided by a human teacher. e field of LfD has grown into an extensive body of literature over the past 30 years, with a wide variety of approaches for encoding human demonstrations and modeling skills and tasks. Additionally, we have only recently seen a focus on gathering data from non-expert human teachers (i.e., domain experts but not robotics experts). In this book, we provide an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn from naive human teachers. We begin, in the introduction, with a unification of the various terminology seen in the literature as well as an outline of the design choices one has in designing an LfD system. Chapter 2 that gives a brief survey of psychology literature that provides insights from human social learning that are relevant to designing robotic social learners. Chapter 3 walks through an LfD interaction, surveying the design choices one makes and state of the art approaches in prior work. First, is the choice of input, how the human teacher interacts with the robot to provide demonstrations. Next, is the choice of modeling technique. Currently, there is a dichotomy in the field between approaches that model low-level motor skills and those that model high-level tasks composed of primitive actions. We devote a chapter to each of these. Chapter 7 on interactive and active learning approaches that allow the robot to refine an existing task model. And finally, Chapter 8 provides on best practices for evaluation of LfD systems, with a focus on how to approach experiments with human subjects for this domain.",
"title": ""
},
{
"docid": "fd22f81af03d9dbcd746ebdfed5277c6",
"text": "Numerous NLP applications rely on search-engine queries, both to extract information from and to compute statistics over the Web corpus. But search engines often limit the number of available queries. As a result, query-intensive NLP applications such as Information Extraction (IE) distribute their query load over several days, making IE a slow, offline process. This paper introduces a novel architecture for IE that obviates queries to commercial search engines. The architecture is embodied in a system called KNOWITNOW that performs high-precision IE in minutes instead of days. We compare KNOWITNOW experimentally with the previouslypublished KNOWITALL system, and quantify the tradeoff between recall and speed. KNOWITNOW’s extraction rate is two to three orders of magnitude higher than KNOWITALL’s. 1 Background and Motivation Numerous modern NLP applications use the Web as their corpus and rely on queries to commercial search engines to support their computation (Turney, 2001; Etzioni et al., 2005; Brill et al., 2001). Search engines are extremely helpful for several linguistic tasks, such as computing usage statistics or finding a subset of web documents to analyze in depth; however, these engines were not designed as building blocks for NLP applications. As a result, the applications are forced to issue literally millions of queries to search engines, which limits the speed, scope, and scalability of the applications. Further, the applications must often then fetch some web documents, which at scale can be very time-consuming. In response to heavy programmatic search engine use, Google has created the “Google API” to shunt programmatic queries away from Google.com and has placed hard quotas on the number of daily queries a program can issue to the API. Other search engines have also introduced mechanisms to limit programmatic queries, forcing applications to introduce “courtesy waits” between queries and to limit the number of queries they issue. To understand these efficiency problems in more detail, consider the KNOWITALL information extraction system (Etzioni et al., 2005). KNOWITALL has a generateand-test architecture that extracts information in two stages. First, KNOWITALL utilizes a small set of domainindependent extraction patterns to generate candidate facts (cf. (Hearst, 1992)). For example, the generic pattern “NP1 such as NPList2” indicates that the head of each simple noun phrase (NP) in NPList2 is a member of the class named in NP1. By instantiating the pattern for class City, KNOWITALL extracts three candidate cities from the sentence: “We provide tours to cities such as Paris, London, and Berlin.” Note that it must also fetch each document that contains a potential candidate. Next, extending the PMI-IR algorithm (Turney, 2001), KNOWITALL automatically tests the plausibility of the candidate facts it extracts using pointwise mutual information (PMI) statistics computed from search-engine hit counts. For example, to assess the likelihood that “Yakima” is a city, KNOWITALL will compute the PMI between Yakima and a set of k discriminator phrases that tend to have high mutual information with city names (e.g., the simple phrase “city”). Thus, KNOWITALL requires at least k search-engine queries for every candidate extraction it assesses. Due to KNOWITALL’s dependence on search-engine queries, large-scale experiments utilizing KNOWITALL take days and even weeks to complete, which makes research using KNOWITALL slow and cumbersome. 
Private access to Google-scale infrastructure would provide sufficient access to search queries, but at prohibitive cost, and the problem of fetching documents (even if from a cached copy) would remain (as we discuss in Section 2.1). Is there a feasible alternative Web-based IE system? If so, what size Web index and how many machines are required to achieve reasonable levels of precision/recall? What would the architecture of this IE system look like, and how fast would it run? To address these questions, this paper introduces a novel architecture for web information extraction. It consists of two components that supplant the generateand-test mechanisms in KNOWITALL. To generate extractions rapidly we utilize our own specialized search engine, called the Bindings Engine (or BE), which efficiently returns bindings in response to variabilized queries. For example, in response to the query “Cities such as ProperNoun(Head(〈NounPhrase〉))”, BE will return a list of proper nouns likely to be city names. To assess these extractions, we use URNS, a combinatorial model, which estimates the probability that each extraction is correct without using any additional search engine queries.1 For further efficiency, we introduce an approximation to URNS, based on frequency of extractions’ occurrence in the output of BE, and show that it achieves comparable precision/recall to URNS. Our contributions are as follows: 1. We present a novel architecture for Information Extraction (IE), embodied in the KNOWITNOW system, which does not depend on Web search-engine queries. 2. We demonstrate experimentally that KNOWITNOW is the first system able to extract tens of thousands of facts from the Web in minutes instead of days. 3. We show that KNOWITNOW’s extraction rate is two to three orders of magnitude greater than KNOWITALL’s, but this increased efficiency comes at the cost of reduced recall. We quantify this tradeoff for KNOWITNOW’s 60,000,000 page index and extrapolate how the tradeoff would change with larger indices. Our recent work has described the BE search engine in detail (Cafarella and Etzioni, 2005), and also analyzed the URNS model’s ability to compute accurate probability estimates for extractions (Downey et al., 2005). However, this is the first paper to investigate the composition of these components to create a fast IE system, and to compare it experimentally to KNOWITALL in terms of time, In contrast, PMI-IR, which is built into KNOWITALL, requires multiple search engine queries to assess each potential extraction. recall, precision, and extraction rate. The frequencybased approximation to URNS and the demonstration of its success are also new. The remainder of the paper is organized as follows. Section 2 provides an overview of BE’s design. Section 3 describes the URNS model and introduces an efficient approximation to URNS that achieves similar precision/recall. Section 4 presents experimental results. We conclude with related and future work in Sections 5 and 6. 2 The Bindings Engine This section explains how relying on standard search engines leads to a bottleneck for NLP applications, and provides a brief overview of the Bindings Engine (BE)—our solution to this problem. A comprehensive description of BE appears in (Cafarella and Etzioni, 2005). Standard search engines are computationally expensive for IE and other NLP tasks. IE systems issue multiple queries, downloading all pages that potentially match an extraction rule, and performing expensive processing on each page. 
For example, such systems operate roughly as follows on the query (“cities such as 〈NounPhrase〉”): 1. Perform a traditional search engine query to find all URLs containing the non-variable terms (e.g., “cities such as”) 2. For each such URL: (a) obtain the document contents, (b) find the searched-for terms (“cities such as”) in the document text, (c) run the noun phrase recognizer to determine whether text following “cities such as” satisfies the linguistic type requirement, (d) and if so, return the string We can divide the algorithm into two stages: obtaining the list of URLs from a search engine, and then processing them to find the 〈NounPhrase〉 bindings. Each stage poses its own scalability and speed challenges. The first stage makes a query to a commercial search engine; while the number of available queries may be limited, a single one executes relatively quickly. The second stage fetches a large number of documents, each fetch likely resulting in a random disk seek; this stage executes slowly. Naturally, this disk access is slow regardless of whether it happens on a locally-cached copy or on a remote document server. The observation that the second stage is slow, even if it is executed locally, is important because it shows that merely operating a “private” search engine does not solve the problem (see Section 2.1). The Bindings Engine supports queries containing typed variables (such as NounPhrase) and string-processing functions (such as “head(X)” or “ProperNoun(X)”) as well as standard query terms. BE processes a variable by returning every possible string in the corpus that has a matching type, and that can be substituted for the variable and still satisfy the user’s query. If there are multiple variables in a query, then all of them must simultaneously have valid substitutions. (So, for example, the query “<NounPhrase> is located in <NounPhrase>” only returns strings when noun phrases are found on both sides of “is located in”.) We call a string that meets these requirements a binding for the variable in question. These queries, and the bindings they elicit, can usefully serve as part of an information extraction system or other common NLP tasks (such as gathering usage statistics). Figure 1 illustrates some of the queries that BE can handle. president Bush <Verb> cities such as ProperNoun(Head(<NounPhrase>)) <NounPhrase> is the CEO of <NounPhrase> Figure 1: Examples of queries that can be handled by BE. Queries that include typed variables and stringprocessing functions allow NLP tasks to be done efficiently without downloading the original document during query processing. BE’s novel neighborhood index enables it to process these queries with O(k) random disk seeks and O(k) serial disk reads, where k is the number of non-variable terms in its query. As a result, BE can yield orders of magnitude speedup as shown in the asymptotic analysis later in this section. The neighborhood index is an augme",
"title": ""
},
{
"docid": "9d45c1deaf429be2a5c33cd44b04290e",
"text": "In this paper, a new omni-directional driving system with one spherical wheel is proposed. This system is able to overcome the existing driving systems with structural limitations in vertical, horizontal and diagonal movement. This driving system was composed of two stepping motors, a spherical wheel covered by a ball bearing, a weight balancer for the elimination of eccentricity, and ball plungers for balance. All parts of this structure is located at same distance on the center because the center of gravity of this system must be placed at the center of the system. An own ball bearing was designed for settled rotation and smooth direction change of a spherical wheel. The principle of an own ball bearing is the reversal of the ball mouse. Steel as the material of ball in the own ball bearing, was used for the prevention the slip with ground. One of the stepping motors is used for driving the spherical wheel. This spherical wheel is stable because of the support of ball bearing. And the other enables to move in a wanted direction while it rotates based on the central axis. The ATmega128 chip is used for the control of two stepping motors. To verify the proposed system, driving experiments was executed in variety of environments. Finally, the performance and the validity of the omni-directional driving system were confirmed.",
"title": ""
},
{
"docid": "b31bae9e7c95e070318df8279cdd18d5",
"text": "This article focuses on the ethical analysis of cyber warfare, the warfare characterised by the deployment of information and communication technologies. It addresses the vacuum of ethical principles surrounding this phenomenon by providing an ethical framework for the definition of such principles. The article is divided in three parts. The first one considers cyber warfare in relation to the so-called information revolution and provides a conceptual analysis of this kind of warfare. The second part focuses on the ethical problems posed by cyber warfare and describes the issues that arise when Just War Theory is endorsed to address them. The final part introduces Information Ethics as a suitable ethical framework for the analysis of cyber warfare, and argues that the vacuum of ethical principles for this kind warfare is overcome when Just War Theory and Information Ethics are merged together.",
"title": ""
},
{
"docid": "5725d1abf54de1b48f60315dab13e5d4",
"text": "Identifying the optimal set of individuals to first receive information (`seeds') in a social network is a widely-studied question in many settings, such as the diffusion of information, microfinance programs, and new technologies. Numerous studies have proposed various network-centrality based heuristics to choose seeds in a way that is likely to boost diffusion. Here we show that, for some frequently studied diffusion processes, randomly seeding S + x individuals can prompt a larger cascade than optimally targeting the best S individuals, for a small x. We prove our results for large classes of random networks, but also show that they hold in simulations over several real-world networks. This suggests that the returns to collecting and analyzing network information to identify the optimal seeds may not be economically significant. Given these findings, practitioners interested in communicating a message to a large number of people may wish to compare the cost of network-based targeting to that of slightly expanding initial outreach.",
"title": ""
},
{
"docid": "6527c10c822c2446b7be928f86d3c8f8",
"text": "In this paper we present a novel algorithm for automatic analysis, transcription, and parameter extraction from isolated polyphonic guitar recordings. In addition to general score-related information such as note onset, duration, and pitch, instrumentspecific information such as the plucked string, the applied plucking and expression styles are retrieved automatically. For this purpose, we adapted several state-of-the-art approaches for onset and offset detection, multipitch estimation, string estimation, feature extraction, and multi-class classification. Furthermore we investigated a robust partial tracking algorithm with respect to inharmonicity, an extensive extraction of novel and known audio features as well as the exploitation of instrument-based knowledge in the form of plausability filtering to obtain more reliable prediction. Our system achieved very high accuracy values of 98 % for onset and offset detection as well as multipitch estimation. For the instrument-related parameters, the proposed algorithm also showed very good performance with accuracy values of 82 % for the string number, 93 % for the plucking style, and 83 % for the expression style. Index Terms playing techniques, plucking style, expression style, multiple fundamental frequency estimation, string classification, fretboard position, fingering, electric guitar, inharmonicity coefficient, tablature",
"title": ""
},
{
"docid": "28d739449d55d77e54571edb3c4ec4ad",
"text": "Immunologic checkpoint blockade with antibodies that target cytotoxic T lymphocyte-associated antigen 4 (CTLA-4) and the programmed cell death protein 1 pathway (PD-1/PD-L1) have demonstrated promise in a variety of malignancies. Ipilimumab (CTLA-4) and pembrolizumab (PD-1) are approved by the US Food and Drug Administration for the treatment of advanced melanoma, and additional regulatory approvals are expected across the oncologic spectrum for a variety of other agents that target these pathways. Treatment with both CTLA-4 and PD-1/PD-L1 blockade is associated with a unique pattern of adverse events called immune-related adverse events, and occasionally, unusual kinetics of tumor response are seen. Combination approaches involving CTLA-4 and PD-1/PD-L1 blockade are being investigated to determine whether they enhance the efficacy of either approach alone. Principles learned during the development of CTLA-4 and PD-1/PD-L1 approaches will likely be used as new immunologic checkpoint blocking antibodies begin clinical investigation.",
"title": ""
},
{
"docid": "afbd0ecad829246ed7d6e1ebcebf5815",
"text": "Battery thermal management system (BTMS) is essential for electric-vehicle (EV) and hybrid-vehicle (HV) battery packs to operate effectively in all climates. Lithium-ion (Li-ion) batteries offer many advantages to the EV such as high power and high specific energy. However, temperature affects their performance, safety, and productive life. This paper is about the design and evaluation of a BTMS based on the Peltier effect heat pumps. The discharge efficiency of a 60-Ah prismatic Li-ion pouch cell was measured under different rates and different ambient temperature values. The obtained results were used to design a solid-state BTMS based on Peltier thermoelectric coolers (TECs). The proposed BTMS is then modeled and evaluated at constant current discharge in the laboratory. In addition, The BTMS was installed in an EV that was driven in the US06 cycle. The thermal response and the energy consumption of the proposed BTMS were satisfactory.",
"title": ""
},
{
"docid": "e2867713be67291ee8c25afa3e2d1319",
"text": "In recent years the <i>l</i><sub>1</sub>, <sub>∞</sub> norm has been proposed for joint regularization. In essence, this type of regularization aims at extending the <i>l</i><sub>1</sub> framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of <i>l</i><sub>1</sub>, <sub>∞</sub> regularized problems. The main challenge in developing such a method resides on being able to compute efficient projections to the <i>l</i><sub>1</sub>, <sub>∞</sub> ball. We present an algorithm that works in <i>O</i>(<i>n</i> log <i>n</i>) time and <i>O</i>(<i>n</i>) memory where <i>n</i> is the number of parameters. We test our algorithm in a multi-task image annotation problem. Our results show that <i>l</i><sub>1</sub>, <sub>∞</sub> leads to better performance than both <i>l</i><sub>2</sub> and <i>l</i><sub>1</sub> regularization and that it is is effective in discovering jointly sparse solutions.",
"title": ""
},
{
"docid": "41cfa26891e28a76c1d4508ab7b60dfb",
"text": "This paper analyses the digital simulation of a buck converter to emulate the photovoltaic (PV) system with focus on fuzzy logic control of buck converter. A PV emulator is a DC-DC converter (buck converter in the present case) having same electrical characteristics as that of a PV panel. The emulator helps in the real analysis of PV system in an environment where using actual PV systems can produce inconsistent results due to variation in weather conditions. The paper describes the application of fuzzy algorithms to the control of dynamic processes. The complete system is modelled in MATLAB® Simulink SimPowerSystem software package. The results obtained from the simulation studies are presented and the steady state and dynamic stability of the PV emulator system is discussed.",
"title": ""
},
{
"docid": "706b2948b19d15953809d2bdff4c04a3",
"text": "The aim of image enhancement is to produce a processed image which is more suitable than the original image for specific application. Application can be edge detection, boundary detection, image fusion, segmentation etc. In this paper different types of image enhancement algorithms in spatial domain are presented for gray scale as well as for color images. Quantitative analysis like AMBE (Absolute mean brightness error), MSE (Mean square error) and PSNR (Peak signal to noise ratio) for the different algorithms are evaluated. For gray scale image Weighted histogram equalization, Linear contrast stretching (LCS), Non linear contrast stretching logarithmic (NLLCS), Non linear contrast stretching exponential (NLECS), Bi Histogram Equalization (BHE) algorithms are discussed and compared. For color image (RGB) Linear contrast stretching, Non linear contrast stretching logarithmic and Non linear contrast stretching exponential algorithms are discussed. During result analysis, it has been observed that some algorithms does give considerably highly distinct values(MSE or AMBE) for different images. To stabilize these parameters, had proposed the new enhancement scheme Local mean and local standard deviation(LMLS) which will take care of these issues. By experimental analysis It has been observed that proposed method gives better AMBE (should be less) and PSNR (should be high) values compared with other algorithms, also these values are not highly distinct for different images.",
"title": ""
},
{
"docid": "8ad9d98ab60211f96f8076144dad3ad2",
"text": "Although firms have invested significant resources in implementing enterprise software systems (ESS) to modernize and integrate their business process infrastructure, customer satisfaction with ESS has remained an understudied phenomenon. In this exploratory research study, we investigate customer satisfaction for support services of ESS and focus on employee skills and customer heterogeneity. We analyze archival customer satisfaction data from 170 real-world customer service encounters of a leading ESS vendor. Our analysis indicates that the technical and behavioral skills of customer support representatives play a major role in influencing overall customer satisfaction with ESS support services. We find that the effect of technical skills on customer satisfaction is moderated by behavioral skills. We also find that the technical skills of the support personnel are valued more by repeat customers than by new customers. We discuss the implications of these findings for managing customer heterogeneity in ESS support services and for the allocation and training of ESS support personnel. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "84e8986eff7cb95808de8df9ac286e37",
"text": "The purpose of this thesis is to describe one-shot-learning gesture recognition systems developed on the ChaLearn Gesture Dataset [3]. We use RGB and depth images and combine appearance (Histograms of Oriented Gradients) and motion descriptors (Histogram of Optical Flow) for parallel temporal segmentation and recognition. The Quadratic-Chi distance family is used to measure differences between histograms to capture cross-bin relationships. We also propose a new algorithm for trimming videos — to remove all the unimportant frames from videos. Our two methods both outperform other published methods and help narrow down the gap between human performance and algorithms on this task. The code has been made publicly available in the MLOSS repository.",
"title": ""
},
{
"docid": "1d1be59a2c3d3b11039f9e4b2e8e351c",
"text": "The impact of digital mobility services on individual traffic behavior within cities has increased significantly over the last years. Therefore, the aim of this paper is to provide an overview of existing digital services for urban transportation. Towards this end, we analyze 59 digital mobility services available as smartphone applications or web services. Building on a framework for service system modularization, we identified the services’ modules and data sources. While some service modules and data sources are integrated in various mobility services, others are only used in specific services, even though they would generate value in other services as well. This overview provides the basis for future design science research in the area of digital service systems for sustainable transportation. Based on the overview, practitioners from industry and public administration can identify potential for innovative service and foster co-creation and innovation within existing service systems.",
"title": ""
},
{
"docid": "4ff50e433ba7a5da179c7d8e5e05cb22",
"text": "Social network information is now being used in ways for which it may have not been originally intended. In particular, increased use of smartphones capable ofrunning applications which access social network information enable applications to be aware of a user's location and preferences. However, current models forexchange of this information require users to compromise their privacy and security. We present several of these privacy and security issues, along withour design and implementation of solutions for these issues. Our work allows location-based services to query local mobile devices for users' social network information, without disclosing user identity or compromising users' privacy and security. We contend that it is important that such solutions be acceptedas mobile social networks continue to grow exponentially.",
"title": ""
},
{
"docid": "824fff3f4aea6a4b4d87c7b1ec5e3e75",
"text": "This paper presents the behaviour of a hybrid system based on renewable sources (wind and solar) with their stochastic behaviour and a pre-programmed timed load profile. The goal is the analysis of a small hybrid system in the context of new services that can be offered to the grids as a power generator. Both the sources have continuous maximum power extraction where the Maximum Power Point Tracking (MPPT) of the wind generator is based on its DC power output. The structure of the system is presented and the local and global control is developed. Simulation and conclusions about the behaviour of the system are presented.",
"title": ""
},
{
"docid": "c3f6e26eb8cccde1b462e2ab6bb199c3",
"text": "Scale-out distributed storage systems have recently gained high attentions with the emergence of big data and cloud computing technologies. However, these storage systems sometimes suffer from performance degradation, especially when the communication subsystem is not fully optimized. The problem becomes worse as the network bandwidth and its corresponding traffic increase. In this paper, we first conduct an extensive analysis of communication subsystem in Ceph, an object-based scale-out distributed storage system. Ceph uses asynchronous messenger framework for inter-component communication in the storage cluster. Then, we propose three major optimizations to improve the performance of Ceph messenger. These include i) deploying load balancing algorithm among worker threads based on the amount of workloads, ii) assigning multiple worker threads (we call dual worker) per single connection to maximize the overlapping activity among threads, and iii) using multiple connections between storage servers to maximize bandwidth usage, and thus reduce replication overhead. The experimental results show that the optimized Ceph messenger outperforms the original messenger implementation up to 40% in random writes with 4K messages. Moreover, Ceph with optimized communication subsystem shows up to 13% performance improvement as compared to original Ceph.",
"title": ""
},
{
"docid": "5f6f0bd98fa03e4434fabe18642a48bc",
"text": "Previous research suggests that women's genital arousal is an automatic response to sexual stimuli, whereas men's genital arousal is dependent upon stimulus features specific to their sexual interests. In this study, we tested the hypothesis that a nonhuman sexual stimulus would elicit a genital response in women but not in men. Eighteen heterosexual women and 18 heterosexual men viewed seven sexual film stimuli, six human films and one nonhuman primate film, while measurements of genital and subjective sexual arousal were recorded. Women showed small increases in genital arousal to the nonhuman stimulus and large increases in genital arousal to both human male and female stimuli. Men did not show any genital arousal to the nonhuman stimulus and demonstrated a category-specific pattern of arousal to the human stimuli that corresponded to their stated sexual orientation. These results suggest that stimulus features necessary to evoke genital arousal are much less specific in women than in men.",
"title": ""
}
] |
scidocsrr
|
586459da6e205f11edcb99d362667bdb
|
Parent-mediated communication-focused treatment in children with autism (PACT): a randomised controlled trial
|
[
{
"docid": "84f9a6913a7689a5bbeb04f3173237b2",
"text": "BACKGROUND\nPsychosocial treatments are the mainstay of management of autism in the UK but there is a notable lack of a systematic evidence base for their effectiveness. Randomised controlled trial (RCT) studies in this area have been rare but are essential because of the developmental heterogeneity of the disorder. We aimed to test a new theoretically based social communication intervention targeting parental communication in a randomised design against routine care alone.\n\n\nMETHODS\nThe intervention was given in addition to existing care and involved regular monthly therapist contact for 6 months with a further 6 months of 2-monthly consolidation sessions. It aimed to educate parents and train them in adapted communication tailored to their child's individual competencies. Twenty-eight children with autism were randomised between this treatment and routine care alone, stratified for age and baseline severity. Outcome was measured at 12 months from commencement of intervention, using standardised instruments.\n\n\nRESULTS\nAll cases studied met full Autism Diagnostic Interview (ADI) criteria for classical autism. Treatment and controls had similar routine care during the study period and there were no study dropouts after treatment had started. The active treatment group showed significant improvement compared with controls on the primary outcome measure--Autism Diagnostic Observation Schedule (ADOS) total score, particularly in reciprocal social interaction--and on secondary measures of expressive language, communicative initiation and parent-child interaction. Suggestive but non-significant results were found in Vineland Adaptive Behaviour Scales (Communication Sub-domain) and ADOS stereotyped and restricted behaviour domain.\n\n\nCONCLUSIONS\nA Randomised Treatment Trial design of this kind in classical autism is feasible and acceptable to patients. This pilot study suggests significant additional treatment benefits following a targeted (but relatively non-intensive) dyadic social communication treatment, when compared with routine care. The study needs replication on larger and independent samples. It should encourage further RCT designs in this area.",
"title": ""
}
] |
[
{
"docid": "8fccceb2757decb670eed84f4b2405a1",
"text": "This paper develops and evaluates search and optimization techniques for autotuning 3D stencil (nearest neighbor) computations on GPUs. Observations indicate that parameter tuning is necessary for heterogeneous GPUs to achieve optimal performance with respect to a search space. Our proposed framework takes a most concise specification of stencil behavior from the user as a single formula, autogenerates tunable code from it, systematically searches for the best configuration and generates the code with optimal parameter configurations for different GPUs. This autotuning approach guarantees adaptive performance for different generations of GPUs while greatly enhancing programmer productivity. Experimental results show that the delivered floating point performance is very close to previous handcrafted work and outperforms other autotuned stencil codes by a large margin. Furthermore, heterogeneous GPU clusters are shown to exhibit the highest performance for dissimilar tuning parameters leveraging proportional partitioning relative to single-GPU performance.",
"title": ""
},
{
"docid": "f3c44f35a2942b3a2b52c0ad72b55aff",
"text": "An overview of Polish and foreign literature concerning the chemical composition of edible mushrooms both cultivated and harvested in natural sites in Poland and abroad is presented. 100 g of fresh mushrooms contains 5.3-14.8 g dry matter, 1.5-6.7 g of carbohydrates, 1.5-3.0 g of protein and 0.3-0.4 g of fat. Mushrooms are a high valued source of mineral constituents, particularly potassium, phosphorus and magnesium and of vitamins of the B group, chiefly vitamins B2 and B3 and also vitamin D. The aroma of the discussed raw materials is based on about 150 aromatic compounds. The mushrooms can be a source of heavy metals and radioactive substances. They are also characterized by the occurrence of numerous enzymes.",
"title": ""
},
{
"docid": "e9b2f987c4744e509b27cbc2ab1487be",
"text": "Analogy and similarity are often assumed to be distinct psychological processes. In contrast to this position, the authors suggest that both similarity and analogy involve a process of structural alignment and mapping, that is, that similarity is like analogy. In this article, the authors first describe the structure-mapping process as it has been worked out for analogy. Then, this view is extended to similarity, where it is used to generate new predictions. Finally, the authors explore broader implications of structural alignment for psychological processing.",
"title": ""
},
{
"docid": "e6ca00d92f6e54ec66943499fba77005",
"text": "This paper covers aspects of governing information data on enterprise level using IBM solutions. In particular it focus on one of the key elements of governance — data lineage for EU GDPR regulations.",
"title": ""
},
{
"docid": "5b3ca1cc607d2e8f0394371f30d9e83a",
"text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.",
"title": ""
},
{
"docid": "ecce348941aeda57bd66dbd7836923e6",
"text": "Moana (2016) continues a tradition of Disney princess movies that perpetuate gender stereotypes. The movie contains the usual Electral undercurrent, with Moana seeking to prove her independence to her overprotective father. Moana’s partner in her adventures, Maui, is overtly hypermasculine, a trait epitomized by a phallic fishhook that is critical to his identity. Maui’s struggles with shapeshifting also reflect male anxieties about performing masculinity. Maui violates the Mother Island, first by entering her cave and then by using his fishhook to rob her of her fertility. The repercussions of this act are the basis of the plot: the Mother Island abandons her form as a nurturing, youthful female (Te Fiti) focused on creation to become a vengeful lava monster (Te Kā). At the end, Moana successfully urges Te Kā to get in touch with her true self, a brave but simple act that is sufficient to bring back Te Fiti, a passive, smiling green goddess. The association of youthful, fertile females with good and witch-like infertile females with evil implies that women’s worth and well-being are dependent upon their procreative function. Stereotypical gender tropes that also include female abuse of power and a narrow conception of masculinity merit analysis in order to further progress in recognizing and addressing patterns of gender hegemony in popular Disney films.",
"title": ""
},
{
"docid": "20fbb79c467e70dccf28f438e3c99efb",
"text": "Surface water is a source of drinking water in most rural communities in Nigeria. This study evaluated the total heterotrophic bacteria (THB) counts and some physico-chemical characteristics of Rivers surrounding Wilberforce Island, Nigeria.Samples were collected in July 2007 and analyzed using standard procedures. The result of the THB ranged from 6.389 – 6.434Log cfu/ml. The physico-chemical parameters results ranged from 6.525 – 7.105 (pH), 56.075 – 64.950μS/cm (Conductivity), 0.010 – 0.050‰ (Salinity), 103.752 – 117.252 NTU (Turbidity), 27.250 – 27.325 oC (Temperature), 10.200 – 14.225 mg/l (Dissolved oxygen), 28.180 – 32.550 mg/l (Total dissolved solid), 0.330 – 0.813 mg/l (Nitrate), 0.378 – 0.530 mg/l (Ammonium). Analysis of variance showed that there were significant variation (P<0.05) in the physicochemical properties except for Salinity and temperature between the two rivers. Also no significant different (P>0.05) exist in the THB density of both rivers; upstream (Agudama-Ekpetiama) and downstream (Akaibiri) of River Nun with regard to ammonium and nitrate. Significant positive correlation (P<0.01) exist between dissolved oxygen with ammonium, Conductivity with salinity and total dissolved solid, salinity with total dissolved solid, turbidity with nitrate, and pH with nitrate. The positive correlation (P<0.05) also exist between pH with turbidity. High turbidity and bacteria density in the water samples is an indication of pollution and contamination respectively. Hence, the consumption of these surface water without treatment could cause health related effects. Keyword: Drinking water sources, microorganisms, physico-chemistry, surface water, Wilberforce Island",
"title": ""
},
{
"docid": "f071a3d699ba4b3452043b6efb14b508",
"text": "BACKGROUND\nThe medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note.\n\n\nMETHODS\nWe constructed the pipeline using the clinical NLP system, clinical Text Analysis and Knowledge Extraction System (cTAKES), the Unified Medical Language System (UMLS) Metathesaurus, Semantic Network, and learning algorithms to extract features from two datasets - clinical notes from Integrating Data for Analysis, Anonymization, and Sharing (iDASH) data repository (n = 431) and Massachusetts General Hospital (MGH) (n = 91,237), and built medical subdomain classifiers with different combinations of data representation methods and supervised learning algorithms. We evaluated the performance of classifiers and their portability across the two datasets.\n\n\nRESULTS\nThe convolutional recurrent neural network with neural word embeddings trained-medical subdomain classifier yielded the best performance measurement on iDASH and MGH datasets with area under receiver operating characteristic curve (AUC) of 0.975 and 0.991, and F1 scores of 0.845 and 0.870, respectively. Considering better clinical interpretability, linear support vector machine-trained medical subdomain classifier using hybrid bag-of-words and clinically relevant UMLS concepts as the feature representation, with term frequency-inverse document frequency (tf-idf)-weighting, outperformed other shallow learning classifiers on iDASH and MGH datasets with AUC of 0.957 and 0.964, and F1 scores of 0.932 and 0.934 respectively. We trained classifiers on one dataset, applied to the other dataset and yielded the threshold of F1 score of 0.7 in classifiers for half of the medical subdomains we studied.\n\n\nCONCLUSION\nOur study shows that a supervised learning-based NLP approach is useful to develop medical subdomain classifiers. The deep learning algorithm with distributed word representation yields better performance yet shallow learning algorithms with the word and concept representation achieves comparable performance with better clinical interpretability. Portable classifiers may also be used across datasets from different institutions.",
"title": ""
},
{
"docid": "2bb535ff25532ccdbf85a301a872c8bd",
"text": "Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a representation of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications, and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. The paper serves as a tutorial for the non-expert reader. It is also a position paper: by looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: do robots need SLAM? Is SLAM solved?",
"title": ""
},
{
"docid": "23f5277dc02907d2ba5dd33c607c6c88",
"text": "Despite the abundance of localization applications, the tracking devices have never been truly realized in Etextiles. Standard printed circuit board (PCB)-based devices are obtrusive and rigid and hence not suitable for textile based implementations. An attractive option would be direct printing of circuit layout on the textile itself, negating the use of rigid PCB materials. However, high surface roughness and porosity of textiles prevents efficient and reliable printing of electronics on textile. In this work, by printing an interface layer on the textile first, a complete localization circuit integrated with an antenna has been inkjet-printed on the textile for the first time. Printed conductive traces were optimized in terms of conductivity and resolution by controlling the number of over-printed layers. The tracking device determines the wearer's position using WiFi and this information can be displayed on any internet-enabled device, such as smart phone. The device is compact (55 mm×45 mm) and lightweight (22 g with 500 mAh battery) for people to comfortably wear it and can be easily concealed in case discretion is required. The device operates at 2.4 GHz communicated up to a distance of 55 m, with localization accuracy of up to 8 m.",
"title": ""
},
{
"docid": "e8e3f77626742ef7aa40703e3113f148",
"text": "This paper presents a multi-agent based framework for target tracking. We exploit the agent-oriented software paradigm with its characteristics that provide intelligent autonomous behavior together with a real time computer vision system to achieve high performance real time target tracking. The framework consists of four layers; interface, strategic, management, and operation layers. Interface layer receives from the user the tracking parameters such as the number and type of trackers and targets and type of the tracking environment, and then delivers these parameters to the subsequent layers. Strategic (decision making) layer is provided with a knowledge base of target tracking methodologies that are previously implemented by researchers in diverse target tracking applications and are proven successful. And by inference in the knowledge base using the user input a tracking methodology is chosen. Management layer is responsible for pursuing and controlling the tracking methodology execution. Operation layer represents the phases in the tracking methodology and is responsible for communicating with the real-time computer vision system to execute the algorithms in the phases. The framework is presented with a case study to show its ability to tackle the target tracking problem and its flexibility to solve the problem with different tracking parameters. This paper describes the ability of the agent-based framework to deploy any real-time vision system that fits in solving the target tracking problem. It is a step towards a complete open standard, real-time, agent-based framework for target tracking.",
"title": ""
},
{
"docid": "f6a19d26df9acabe9185c4c167520422",
"text": "OBJECTIVE Benign enlargement of the subarachnoid spaces (BESS) is a common finding on imaging studies indicated by macrocephaly in infancy. This finding has been associated with the presence of subdural fluid collections that are sometimes construed as suggestive of abusive head injury. The prevalence of BESS among infants with macrocephaly and the prevalence of subdural collections among infants with BESS are both poorly defined. The goal of this study was to determine the relative frequencies of BESS, hydrocephalus, and subdural collections in a large consecutive series of imaging studies performed for macrocephaly and to determine the prevalence of subdural fluid collections among patients with BESS. METHODS A text search of radiology requisitions identified studies performed for macrocephaly in patients ≤ 2 years of age. Studies of patients with hydrocephalus or acute trauma were excluded. Studies that demonstrated hydrocephalus or chronic subdural hematoma not previously recognized but responsible for macrocephaly were noted but not investigated further. The remaining studies were reviewed for the presence of incidental subdural collections and for measurement of the depth of the subarachnoid space. A 3-point scale was used to grade BESS: Grade 0, < 5 mm; Grade 1, 5-9 mm; and Grade 2, ≥ 10 mm. RESULTS After exclusions, there were 538 studies, including 7 cases of hydrocephalus (1.3%) and 1 large, bilateral chronic subdural hematoma (0.2%). There were incidental subdural collections in 21 cases (3.9%). Two hundred sixty-five studies (49.2%) exhibited Grade 1 BESS, and 46 studies (8.6%) exhibited Grade 2 BESS. The prevalence of incidental subdural collections among studies with BESS was 18 of 311 (5.8%). The presence of BESS was associated with a greater prevalence of subdural collections, and higher grades of BESS were associated with increasing prevalence of subdural collections. After controlling for imaging modality, the odds ratio of the association of BESS with subdural collections was 3.68 (95% CI 1.12-12.1, p = 0.0115). There was no association of race, sex, or insurance status with subdural collections. Patients with BESS had larger head circumference Z-scores, but there was no association of head circumference or age with subdural collections. Interrater reliability in the diagnosis and grading of BESS was only fair. CONCLUSIONS The current study confirms the association of BESS with incidental subdural collections and suggests that greater depth of the subarachnoid space is associated with increased prevalence of such collections. These observations support the theory that infants with BESS have a predisposition to subdural collections on an anatomical basis. Incidental subdural collections in the setting of BESS are not necessarily indicative of abusive head injury.",
"title": ""
},
{
"docid": "1f752034b5307c0118d4156d0b95eab3",
"text": "Importance\nTherapy-related myeloid neoplasms are a potentially life-threatening consequence of treatment for autoimmune disease (AID) and an emerging clinical phenomenon.\n\n\nObjective\nTo query the association of cytotoxic, anti-inflammatory, and immunomodulating agents to treat patients with AID with the risk for developing myeloid neoplasm.\n\n\nDesign, Setting, and Participants\nThis retrospective case-control study and medical record review included 40 011 patients with an International Classification of Diseases, Ninth Revision, coded diagnosis of primary AID who were seen at 2 centers from January 1, 2004, to December 31, 2014; of these, 311 patients had a concomitant coded diagnosis of myelodysplastic syndrome (MDS) or acute myeloid leukemia (AML). Eighty-six cases met strict inclusion criteria. A case-control match was performed at a 2:1 ratio.\n\n\nMain Outcomes and Measures\nOdds ratio (OR) assessment for AID-directed therapies.\n\n\nResults\nAmong the 86 patients who met inclusion criteria (49 men [57%]; 37 women [43%]; mean [SD] age, 72.3 [15.6] years), 55 (64.0%) had MDS, 21 (24.4%) had de novo AML, and 10 (11.6%) had AML and a history of MDS. Rheumatoid arthritis (23 [26.7%]), psoriasis (18 [20.9%]), and systemic lupus erythematosus (12 [14.0%]) were the most common autoimmune profiles. Median time from onset of AID to diagnosis of myeloid neoplasm was 8 (interquartile range, 4-15) years. A total of 57 of 86 cases (66.3%) received a cytotoxic or an immunomodulating agent. In the comparison group of 172 controls (98 men [57.0%]; 74 women [43.0%]; mean [SD] age, 72.7 [13.8] years), 105 (61.0%) received either agent (P = .50). Azathioprine sodium use was observed more frequently in cases (odds ratio [OR], 7.05; 95% CI, 2.35- 21.13; P < .001). Notable but insignificant case cohort use among cytotoxic agents was found for exposure to cyclophosphamide (OR, 3.58; 95% CI, 0.91-14.11) followed by mitoxantrone hydrochloride (OR, 2.73; 95% CI, 0.23-33.0). Methotrexate sodium (OR, 0.60; 95% CI, 0.29-1.22), mercaptopurine (OR, 0.62; 95% CI, 0.15-2.53), and mycophenolate mofetil hydrochloride (OR, 0.66; 95% CI, 0.21-2.03) had favorable ORs that were not statistically significant. No significant association between a specific length of time of exposure to an agent and the drug's category was observed.\n\n\nConclusions and Relevance\nIn a large population with primary AID, azathioprine exposure was associated with a 7-fold risk for myeloid neoplasm. The control and case cohorts had similar systemic exposures by agent category. No association was found for anti-tumor necrosis factor agents. Finally, no timeline was found for the association of drug exposure with the incidence in development of myeloid neoplasm.",
"title": ""
},
{
"docid": "3bff3136e5e2823d0cca2f864fe9e512",
"text": "Cloud computing provides variety of services with the growth of their offerings. Due to efficient services, it faces numerous challenges. It is based on virtualization, which provides users a plethora computing resources by internet without managing any infrastructure of Virtual Machine (VM). With network virtualization, Virtual Machine Manager (VMM) gives isolation among different VMs. But, sometimes the levels of abstraction involved in virtualization have been reducing the workload performance which is also a concern when implementing virtualization to the Cloud computing domain. In this paper, it has been explored how the vendors in cloud environment are using Containers for hosting their applications and also the performance of VM deployments. It also compares VM and Linux Containers with respect to the quality of service, network performance and security evaluation.",
"title": ""
},
{
"docid": "de4677a8bb9d1e43a4b6fe4f2e6b6106",
"text": "Reinforcement learning (RL) has developed into a large research field. The current state-ofthe-art is comprised of several subfields dealing with, for example, hierarchical abstraction and relational representations. This overview is targeted at researchers interested in RL who want to know where to start when studying RL in general, and where to start within the field of RL when faced with specific problem domains. This overview is by no means complete, nor does it describe all relevant texts. In fact, there are many more. The main function of this overview is to provide a reasonable amount of good entry points into the rich field of RL. All texts are widely available and most of them are online. General and Introductory Texts There are many texts that introduce the exciting field of RL and Markov decision processes (see for example the mentioned PhD theses at the end of this overview). Furthermore, many recent AI and machine learning textbooks cover basic RL. Some of the core texts in the field are the following. I M. L. Puterman. Markov Decision Processes—Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, 1994 I D. P. Bertsekas and J. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996 I L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996 I S. S. Keerthi and B. Ravindran. Reinforcement learning. In E. Fiesler and R. Beale, editors, Handbook of Neural Computation, chapter C3. Institute of Physics and Oxford University Press, New York, New York, 1997 I R. S. Sutton and A. G. Barto. Reinforcement Learning: an Introduction. The MIT Press, Cambridge, 1998 I C. Boutilier, T. Dean, and S. Hanks. Decision theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 11:1–94, 1999 I M. van Otterlo. The Logic of Adaptive Behavior: Knowledge Representation and Algorithms for Adaptive Sequential Decision Making under Uncertainty in First-Order and Relational Domains. IOS Press, Amsterdam, The Netherlands, 2009 The book by Sutton and Barto is available online, for free. You can find it at http://www.cs.ualberta.ca/∼ sutton/book/the-book.html Function Approximation, Generalization and Abstraction Because most problems are too large to represent explicitly, the majority of techniques in current RL research employs some form of generalization, abstraction or function approximation. Ergo, there are innumerable texts that deal with these matters. Some interesting starting points are the following.",
"title": ""
},
{
"docid": "76e7f63fa41d6d457e6e4386ad7b9896",
"text": "A growing body of work has highlighted the challenges of identifying the stance that a speaker holds towards a particular topic, a task that involves identifying a holistic subjective disposition. We examine stance classification on a corpus of 4873 posts from the debate website ConvinceMe.net, for 14 topics ranging from the playful to the ideological. We show that ideological debates feature a greater share of rebuttal posts, and that rebuttal posts are significantly harder to classify for stance, for both humans and trained classifiers. We also demonstrate that the number of subjective expressions varies across debates, a fact correlated with the performance of systems sensitive to sentiment-bearing terms. We present results for classifying stance on a per topic basis that range from 60% to 75%, as compared to unigram baselines that vary between 47% and 66%. Our results suggest that features and methods that take into account the dialogic context of such posts improve accuracy.",
"title": ""
},
{
"docid": "9a92f79365bb31133b131946ecb56824",
"text": "Recent color transfer methods use local information to learn the transformation from a source to an exemplar image, and then transfer this appearance change to a target image. These solutions achieve very successful results for general mood changes, e.g., changing the appearance of an image from “sunny” to “overcast”. However, such methods have a hard time creating new image content, such as leaves on a bare tree. Texture transfer, on the other hand, can synthesize such content but tends to destroy image structure. We propose the first algorithm that unifies color and texture transfer, outperforming both by leveraging their respective strengths. A key novelty in our approach resides in teasing apart appearance changes that can be modeled simply as changes in color versus those that require new image content to be generated. Our method starts with an analysis phase which evaluates the success of color transfer by comparing the exemplar with the source. This analysis then drives a selective, iterative texture transfer algorithm that simultaneously predicts the success of color transfer on the target and synthesizes new content where needed. We demonstrate our unified algorithm by transferring large temporal changes between photographs, such as change of season – e.g., leaves on bare trees or piles of snow on a street – and flooding.",
"title": ""
},
{
"docid": "e97201f22acbd963cdffb29f95718f92",
"text": "Nowadays basic algorithms such as Apriori and Eclat often are conceived as mere textbook examples without much practical applicability: in practice more sophisticated algorithms with better performance have to be used. We would like to challenge that point of view by showing that a carefully assembled implementation of Eclat outperforms the best algorithms known in the field, at least for dense datasets. For that we view Eclat as a basic algorithm and a bundle of optional algorithmic features that are taken partly from other algorithms like lcm and Apriori, partly new ones. We evaluate the performance impact of these different features and report about results of experiments that support our claim of the competitiveness of Eclat.",
"title": ""
},
{
"docid": "af7d318e1c203358c87592d0c6bcb4d2",
"text": "A fundamental component of spatial modulation (SM), termed generalized space shift keying (GSSK), is presented. GSSK modulation inherently exploits fading in wireless communication to provide better performance over conventional amplitude/phase modulation (APM) techniques. In GSSK, only the antenna indices, and not the symbols themselves (as in the case of SM and APM), relay information. We exploit GSSKpsilas degrees of freedom to achieve better performance, which is done by formulating its constellation in an optimal manner. To support our results, we also derive upper bounds on GSSKpsilas bit error probability, where the source of GSSKpsilas strength is made clear. Analytical and simulation results show performance gains (1.5-3 dB) over popular multiple antenna APM systems (including Bell Laboratories layered space time (BLAST) and maximum ratio combining (MRC) schemes), making GSSK an excellent candidate for future wireless applications.",
"title": ""
},
{
"docid": "246f904f115070089776a77db240e41d",
"text": "Children with better-developed motor skills may find it easier to be active and engage in more physical activity (PA) than those with less-developed motor skills. The purpose of this study was to examine the relationship between motor skill performance and PA in preschool children. Participants were 80 three- and 118 four-year-old children. The Children's Activity and Movement in Preschool Study (CHAMPS) Motor Skill Protocol was used to assess process characteristics of six locomotor and six object control skills; scores were categorized as locomotor, object control, and total. The actigraph accelerometer was used to measure PA; data were expressed as percent of time spent in sedentary, light, moderate-to-vigorous PA (MVPA), and vigorous PA (VPA). Children in the highest tertile for total score spent significantly more time in MVPA (13.4% vs. 12.8% vs. 11.4%) and VPA (5% vs. 4.6% vs. 3.8%) than children in middle and lowest tertiles. Children in the highest tertile of locomotor scores spent significantly less time in sedentary activity than children in other tertiles and significantly more time in MVPA (13.4% vs. 11.6%) and VPA (4.9% vs. 3.8%) than children in the lowest tertile. There were no differences among tertiles for object control scores. Children with poorer motor skill performance were less active than children with better-developed motor skills. This relationship between motor skill performance and PA could be important to the health of children, particularly in obesity prevention. Clinicians should work with parents to monitor motor skills and to encourage children to engage in activities that promote motor skill performance.",
"title": ""
}
] |
scidocsrr
|
74b4282ea94716a805567aa7f44c6e69
|
net Wireless Fetal Monitoring
|
[
{
"docid": "0da78253d26ddba2b17dd76c4b4c697a",
"text": "In this work, a portable real-time wireless health monitoring system is developed. The system is used for remote monitoring of patients' heart rate and oxygen saturation in blood. The system was designed and implemented using ZigBee wireless technologies. All pulse oximetry data are transferred within a group of wireless personal area network (WPAN) to database computer server. The sensor modules were designed for low power operation with a program that can adjust power management depending on scenarios of power source and current power operation. The sensor unit consists of (1) two types of LEDs and photodiode packed in Velcro strip that is facing to a patient's fingertip; (2) Microcontroller unit for interfacing with ZigBee module, processing pulse oximetry data and storing some data before sending to base PC; (3) ZigBee module for communicating the data of pulse oximetry, ZigBee module gets all commands from microcontroller unit and it has a complete ZigBee stack inside and (4) Base node for receiving and storing the data before sending to PC.",
"title": ""
}
] |
[
{
"docid": "ef7b6c2b0254535e9dbf85a4af596080",
"text": "African swine fever virus (ASFV) is a highly virulent swine pathogen that has spread across Eastern Europe since 2007 and for which there is no effective vaccine or treatment available. The dynamics of shedding and excretion is not well known for this currently circulating ASFV strain. Therefore, susceptible pigs were exposed to pigs intramuscularly infected with the Georgia 2007/1 ASFV strain to measure those dynamics through within- and between-pen transmission scenarios. Blood, oral, nasal and rectal fluid samples were tested for the presence of ASFV by virus titration (VT) and quantitative real-time polymerase chain reaction (qPCR). Serum was tested for the presence of ASFV-specific antibodies. Both intramuscular inoculation and contact transmission resulted in development of acute disease in all pigs although the experiments indicated that the pathogenesis of the disease might be different, depending on the route of infection. Infectious ASFV was first isolated in blood among the inoculated pigs by day 3, and then chronologically among the direct and indirect contact pigs, by day 10 and 13, respectively. Close to the onset of clinical signs, higher ASFV titres were found in blood compared with nasal and rectal fluid samples among all pigs. No infectious ASFV was isolated in oral fluid samples although ASFV genome copies were detected. Only one animal developed antibodies starting after 12 days post-inoculation. The results provide quantitative data on shedding and excretion of the Georgia 2007/1 ASFV strain among domestic pigs and suggest a limited potential of this isolate to cause persistent infection.",
"title": ""
},
{
"docid": "66c9a05d8ff109696f5c09a70c5f11fc",
"text": "How do informal institutions influence the formation and function of formal institutions? Existing typologies focus on the interaction of informal institutions with an established framework of formal rules that is taken for granted. In transitional settings, such typologies are less helpful, since many formal institutions are in a state of flux. Instead, using examples drawn from postcommunist state development, I argue that informal institutions can replace, undermine, and reinforce formal institutions irrespective of the latter’s strength, and that the elite competition generated by informal rules further influences which of these interactions dominate the development of the institutional framework. In transitional settings, the emergence and effectiveness of many formal institutions is endogenous to the informal institutions themselves.",
"title": ""
},
{
"docid": "651e1c0385dd55e04bb2fe90f0e6dd24",
"text": "Pollution has been recognized as the major threat to sustainability of river in Malaysia. Some of the limitations of existing methods for river monitoring are cost of deployment, non-real-time monitoring, and low resolution both in time and space. To overcome these limitations, a smart river monitoring solution is proposed for river water quality in Malaysia. The proposed method incorporates unmanned aerial vehicle (UAV), internet of things (IoT), low power wide area (LPWA) and data analytic (DA). A setup of the proposed method and preliminary results are presented. The proposed method is expected to deliver an efficient and real-time solution for river monitoring in Malaysia.",
"title": ""
},
{
"docid": "61b6cf4bc86ae9a817f6e809fdf59ad2",
"text": "In the last few years, phishing scams have rapidly grown posing huge threat to global Internet security. Today, phishing attack is one of the most common and serious threats over Internet where cyber attackers try to steal user’s personal or financial credentials by using either malwares or social engineering. Detection of phishing attacks with high accuracy has always been an issue of great interest. Recent developments in phishing detection techniques have led to various new techniques, specially designed for phishing detection where accuracy is extremely important. Phishing problem is widely present as there are several ways to carry out such an attack, which implies that one solution is not adequate to address it. Two main issues are addressed in our paper. First, we discuss in detail phishing attacks, history of phishing attacks and motivation of attacker behind performing this attack. In addition, we also provide taxonomy of various types of phishing attacks. Second, we provide taxonomy of various solutions proposed in the literature to detect and defend from phishing attacks. In addition, we also discuss various issues and challenges faced in dealing with phishing attacks and spear phishing and how phishing is now targeting the emerging domain of IoT. We discuss various tools and datasets that are used by the researchers for the evaluation of their approaches. This provides better understanding of the problem, current solution space and future research scope to efficiently deal with such attacks.",
"title": ""
},
{
"docid": "936048690fb043434c3ee0060c5bf7a5",
"text": "This paper asks whether case-based reasoning is an artificial intelligence (AI) technology like rule-based reasoning, neural networks or genetic algorithms or whether it is better described as a methodology for problem solving, that may use any appropriate technology. By describing four applications of case-based reasoning (CBR), that variously use: nearest neighbour, induction, fuzzy logic and SQL, the author shows that CBR is a methodology and not a technology. The implications of this are discussed. q 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "e87c67ffe98bf90ada3002fe87a9bbdd",
"text": "Visually analyzing citation networks poses challenges to many fields of the data mining research. How can we summarize a large citation graph according to the user's interest? In particular, how can we illustrate the impact of a highly influential paper through the summarization? Can we maintain the sensory node-link graph structure while revealing the flow-based influence patterns and preserving a fine readability? The state-of-the-art influence maximization algorithms can detect the most influential node in a citation network, but fail to summarize a graph structure to account for its influence. On the other hand, existing graph summarization methods fold large graphs into clustered views, but can not reveal the hidden influence patterns underneath the citation network. In this paper, we first formally define the Influence Graph Summarization problem on citation networks. Second, we propose a matrix decomposition based algorithm pipeline to solve the IGS problem. Our method can not only highlight the flow-based influence patterns, but also easily extend to support the rich attribute information. A prototype system called VEGAS implementing this pipeline is also developed. Third, we present a theoretical analysis on our main algorithm, which is equivalent to the kernel k-mean clustering. It can be proved that the matrix decomposition based algorithm can approximate the objective of the proposed IGS problem. Last, we conduct comprehensive experiments with real-world citation networks to compare the proposed algorithm with classical graph summarization methods. Evaluation results demonstrate that our method significantly outperforms the previous ones in optimizing both the quantitative IGS objective and the quality of the visual summarizations.",
"title": ""
},
{
"docid": "0b872b1d13c9a96c52046b41272e3a5f",
"text": "This dissertation describes experiments conducted to evaluate an algorithm that attempts to automatically recognise emotions (affect) in written language. Examples from several areas of research that can inform affect recognition experiments are reviewed, including sentiment analysis, subjectivity analysis, and the psychology of emotion. An affect annotation exercise was carried out in order to build a suitable set of test data for the experiment. An algorithm to classify according to the emotional content of sentences was derived from an existing technique for sentiment analysis. When compared against the manual annotations, the algorithm achieved an accuracy of 32.78%. Several factors indicate that the method is making slightly informed choices, and could be useful as part of a holistic approach to recognising the affect represented in text. iii Acknowledgements",
"title": ""
},
{
"docid": "5d673f5297919e6307dc2861d10ddfe6",
"text": "Given the increased testing of school-aged children in the United States there is a need for a current and valid scale to measure the effects of test anxiety in children. The domain of children’s test anxiety was theorized to be comprised of three dimensions: thoughts, autonomic reactions, and off-task behaviors. Four stages are described in the evolution of the Children’s Test Anxiety Scale (CTAS): planning, construction, quantitative evaluation, and validation. A 50-item scale was administered to a development sample (N /230) of children in grades 3 /6 to obtain item analysis and reliability estimates which resulted in a refined 30-item scale. The reduced scale was administered to a validation sample (N /261) to obtain construct validity evidence. A three-factor structure fit the data reasonably well. Recommendations for future research with the scale are described.",
"title": ""
},
{
"docid": "32977df591e90db67bf09b0412f56d7b",
"text": "In an electronic warfare (EW) battlefield environment, it is highly necessary for a fighter aircraft to intercept and identify the several interleaved radar signals that it receives from the surrounding emitters, so as to prepare itself for countermeasures. The main function of the Electronic Support Measure (ESM) receiver is to receive, measure, deinterleave pulses and then identify alternative threat emitters. Deinterleaving of radar signals is based on time of arrival (TOA) analysis and the use of the sequential difference (SDIF) histogram method for determining the pulse repetition interval (PRI), which is an important pulse parameter. Once the pulse repetition intervals are determined, check for the existence of staggered PRI (level-2) is carried out, implemented in MATLAB. Keywordspulse deinterleaving, pulse repetition interval, stagger PRI, sequential difference histogram, time of arrival.",
"title": ""
},
{
"docid": "9d5e1ec9444b1113c79c3740f9f773cf",
"text": "Intuitionistic Fuzzy Sets (IFS) are a generalization of fuzzy sets where the membership is an interval. That is, membership, instead of being a single value, is an interval. A large number of operations have been defined for this type of fuzzy sets, and several applications have been developed in the last years. In this paper we describe hesitant fuzzy sets. They are another generalization of fuzzy sets. Although similar in intention to IFS, some basic differences on their interpretation and on their operators exist. In this paper we review their definition, the main results and we present an extension principle, which permits to generalize existing operations on fuzzy sets to this new type of fuzzy sets. We also discuss their use in decision making.",
"title": ""
},
{
"docid": "57602f5e2f64514926ab96551f2b4fb6",
"text": "Landscape genetics has seen rapid growth in number of publications since the term was coined in 2003. An extensive literature search from 1998 to 2008 using keywords associated with landscape genetics yielded 655 articles encompassing a vast array of study organisms, study designs and methodology. These publications were screened to identify 174 studies that explicitly incorporated at least one landscape variable with genetic data. We systematically reviewed this set of papers to assess taxonomic and temporal trends in: (i) geographic regions studied; (ii) types of questions addressed; (iii) molecular markers used; (iv) statistical analyses used; and (v) types and nature of spatial data used. Overall, studies have occurred in geographic regions proximal to developed countries and more commonly in terrestrial vs. aquatic habitats. Questions most often focused on effects of barriers and/or landscape variables on gene flow. The most commonly used molecular markers were microsatellites and amplified fragment length polymorphism (AFLPs), with AFLPs used more frequently in plants than animals. Analysis methods were dominated by Mantel and assignment tests. We also assessed differences among journals to evaluate the uniformity of reporting and publication standards. Few studies presented an explicit study design or explicit descriptions of spatial extent. While some landscape variables such as topographic relief affected most species studied, effects were not universal, and some species appeared unaffected by the landscape. Effects of habitat fragmentation were mixed, with some species altering movement paths and others unaffected. Taken together, although some generalities emerged regarding effects of specific landscape variables, results varied, thereby reinforcing the need for species-specific work. We conclude by: highlighting gaps in knowledge and methodology, providing guidelines to authors and reviewers of landscape genetics studies, and suggesting promising future directions of inquiry.",
"title": ""
},
{
"docid": "7f6a45292aeca83bebb9556c938e0782",
"text": "Many methods of text summarization combining sentence selection and sentence compression have recently been proposed. Although the dependency between words has been used in most of these methods, the dependency between sentences, i.e., rhetorical structures, has not been exploited in such joint methods. We used both dependency between words and dependency between sentences by constructing a nested tree, in which nodes in the document tree representing dependency between sentences were replaced by a sentence tree representing dependency between words. We formulated a summarization task as a combinatorial optimization problem, in which the nested tree was trimmed without losing important content in the source document. The results from an empirical evaluation revealed that our method based on the trimming of the nested tree significantly improved the summarization of texts.",
"title": ""
},
{
"docid": "08d5c83c7effa92659ea705ad51317e2",
"text": "This article examines cognitive, affective, and behavioral measures of motivation and reviews their use throughout the discipline of experimental social psychology. We distinguish between two dimensions of motivation (outcome-focused motivation and process-focused motivation). We discuss circumstances under which measures may help distinguish between different dimensions of motivation, as well as circumstances under which measures may capture different dimensions of motivation in similar ways. Furthermore, we examine situations in which various measures may capture fluctuations in nonmotivational factors, such as learning or physiological depletion. This analysis seeks to advance research in experimental social psychology by highlighting the need for caution when selecting measures of motivation and when interpreting fluctuations captured by these measures. Motivation – the psychological force that enables action – has long been the object of scientific inquiry (Carver & Scheier, 1998; Festinger, 1957; Fishbein & Ajzen, 1974; Hull, 1932; Kruglanski, 1996; Lewin, 1935; Miller, Galanter, & Pribram, 1960; Mischel, Shoda, & Rodriguez, 1989; Zeigarnik, 1927). Because motivation is a psychological construct that cannot be observed or recorded directly, studying it raises an important question: how to measure motivation? Researchers measure motivation in terms of observable cognitive (e.g., recall, perception), affective (e.g., subjective experience), behavioral (e.g., performance), and physiological (e.g., brain activation) responses and using self-reports. Furthermore, motivation is measured in relative terms: compared to previous or subsequent levels of motivation or to motivation in a different goal state (e.g., salient versus non-salient goal). For example, following exposure to a health-goal prime (e.g., gymmembership card), an individual might be more motivated to exercise now than she was 20minutes ago (before exposure to the prime), or than another person who was not exposed to the same prime. An important aspect of determining how to measure motivation is understanding what type of motivation one is attempting to capture. Thus, in exploring the measures of motivation, the present article takes into account different dimensions of motivation. In particular, we highlight the distinction between the outcome-focused motivation to complete a goal (Brehm & Self, 1989; Locke & Latham, 1990; Powers, 1973) and the process-focused motivation to attend to elements related to the process of goal pursuit – with less emphasis on the outcome. Process-related elements may include using “proper” means during goal pursuit (means-focused motivation; Higgins, Idson, Freitas, Spiegel, & Molden, 2003; Touré-Tillery & Fishbach, 2012) and enjoying the experience of goal pursuit (intrinsic motivation; Deci & Ryan, 1985; Fishbach & Choi, 2012; Sansone & Harackiewicz, 1996; Shah & Kruglanski, 2000). In some cases, particular measures of motivation may help distinguish between these different dimensions of motivation, whereas other measures may not. For example, the measured speed at which a person works on a task can have several interpretations. 
© 2014 John Wiley & Sons Ltd How to Measure Motivation 329 Working slowly could mean (a) that the individual’s motivation to complete the task is low (outcome-focused motivation); or (b) that her motivation to engage in the task is high such that she is “savoring” the task (intrinsic motivation); or (c) that her motivation to “do it right” and use proper means is high such that she is applying herself (means-focused motivation); or even (d) that she is tired (diminished physiological resources). In this case, additional measures (e.g., accuracy in performance) and manipulations (e.g., task difficulty) may help tease apart these various potential interpretations. Thus, experimental researchers must exercise caution when selecting measures of motivation and when interpreting the fluctuations captured by these measures. This review provides a guide for how to measure fluctuations in motivation in experimental settings. One approach is to ask people to rate their motivation (i.e., “how motivated are you?”). However, such an approach is limited to people’s conscious understanding of their own psychological states and can further be biased by social desirability concerns; hence, research in experimental social psychology developed a variety of cognitive and behavioral paradigms to assess motivation without relying on self-reports. We focus on these objective measures of situational fluctuations in motivation. We note that other fields of psychological research commonly use physiological measures (e.g., brain activation, skin conductance), self-report measures (i.e., motivation scales), or measure motivation as a stable trait. These physiological, self-report, and trait measures of motivation are beyond the scope our review. In the sections that follow, we start with a discussion of measures researchers commonly use to capture motivation. We review cognitive measures such as memory accessibility, evaluations, and perceptions of goal-relevant objects, as well as affective measures such as subjective experience. Next, we examine the use of behavioral measures such as speed, performance, and choice to capture fluctuations in motivational strength. In the third section, we discuss the outcomeand process-focused dimensions of motivation and examine specific measures of process-focused motivation, including measures of intrinsic motivation and means-focused motivation. We then discuss how different measures may help distinguish between the outcomeand process-focused dimensions. In the final section, we explore circumstances under which measures may capture fluctuations in learning and physiological resources, rather than changes in motivation. We conclude with some implications of this analysis for the measurement and study of motivation. Cognitive and Affective Measures of Motivation Experimental social psychologists conceptualize a goal as the cognitive representation of a desired end state (Fishbach & Ferguson, 2007; Kruglanski, 1996). According to this view, goals are organized in associative memory networks connecting each goal to corresponding constructs. Goal-relevant constructs could be activities or objects that contribute to goal attainment (i.e., means; Kruglanski et al., 2002), as well as activities or objects that hinder goal attainment (i.e., temptations; Fishbach, Friedman, & Kruglanski, 2003). For example, the goal to eat healthily may be associated with constructs such as apple, doctor (facilitating means), or French fries (hindering temptation). 
Cognitive and affective measures of motivation include the activation, evaluation, and perception of these goal-related constructs and the subjective experience they evoke. Goal activation: Memory, accessibility, and inhibition of goal-related constructs Constructs related to a goal can activate or prime the pursuit of that goal. For example, the presence of one’s study partner or the word “exam” in a game of scrabble can activate a student’s academic goal and hence increase her motivation to study. Once a goal is active, Social and Personality Psychology Compass 8/7 (2014): 328–341, 10.1111/spc3.12110 © 2014 John Wiley & Sons Ltd 330 How to Measure Motivation the motivational system prepares the individual for action by activating goal-relevant information (Bargh & Barndollar, 1996; Gollwitzer, 1996; Kruglanski, 1996). Thus, motivation manifests itself in terms of how easily goal-related constructs are brought tomind (i.e., accessibility; Aarts, Dijksterhuis, & De Vries, 2001; Higgins & King, 1981; Wyer & Srull, 1986). The activation and subsequent pursuit of a goal can be conscious, such that one is aware of the cues that led to goal-related judgments and behaviors. This activation can also be non-conscious, such that a one is unaware of the goal prime or that one is even exhibiting goal-related judgments and behaviors. Whether goals are conscious or non-conscious, a fundamental characteristic of goal-driven processes is the persistence of the accessibility of goal-related constructs for as long as the goal is active or until an individual disengages from the goal (Bargh, Gollwitzer, Lee-Chai, Barndollar, & Trotschel, 2001; Goschke & Kuhl, 1993). Upon goal completion, motivation diminishes and accessibility is inhibited (Liberman & Förster, 2000; Marsh, Hicks, & Bink, 1998). This active reduction in accessibility allows individuals to direct their cognitive resources to other tasks at hand without being distracted by thoughts of a completed goal. Thus, motivation can be measured by the degree to which goal-related concepts are accessible inmemory. Specifically, the greater the motivation to pursue/achieve a goal, the more likely individuals are to remember, notice, or recognize concepts, objects, or persons related to that goal. For example, in a classic study, Zeigarnik (1927) instructed participants to perform 20 short tasks, ten of which they did not get a chance to finish because the experimenter interrupted them. At the end of the study, Zeigarnik inferred the strength of motivation by asking participants to recall as many of the tasks as possible. Consistent with the notion that unfulfilled goals are associated with heightened motivational states, whereas fulfilled goals inhibit motivation, the results show that participants recalled more uncompleted tasks (i.e., unfulfilled goals) than completed tasks (i.e., fulfilled goals; the Zeigarnik effect). More recently, Förster, Liberman, and Higgins (2005) replicated these findings; inferring motivation from performance on a lexical decision task. Their study assessed the speed of recognizing – i.e., identifying as words versus non-words –words related to a focal goal prior to (versus after) completing that goal. A related measure of motivation is the inhibition of conflicting constructs. In ",
"title": ""
},
{
"docid": "1557db582fbcf5e17c2b021b6d37b03a",
"text": "Visual codebook based quantization of robust appearance descriptors extracted from local image patches is an effective means of capturing image statistics for texture analysis and scene classification. Codebooks are usually constructed by using a method such as k-means to cluster the descriptor vectors of patches sampled either densely ('textons') or sparsely ('bags of features' based on key-points or salience measures) from a set of training images. This works well for texture analysis in homogeneous images, but the images that arise in natural object recognition tasks have far less uniform statistics. We show that for dense sampling, k-means over-adapts to this, clustering centres almost exclusively around the densest few regions in descriptor space and thus failing to code other informative regions. This gives suboptimal codes that are no better than using randomly selected centres. We describe a scalable acceptance-radius based clusterer that generates better codebooks and study its performance on several image classification tasks. We also show that dense representations outperform equivalent keypoint based ones on these tasks and that SVM or mutual information based feature selection starting from a dense codebook further improves the performance.",
"title": ""
},
{
"docid": "2e976aa51bc5550ad14083d5df7252a8",
"text": "This paper presents a 60-dB gain bulk-driven Miller OTA operating at 0.25-V power supply in the 130-nm digital CMOS process. The amplifier operates in the weak-inversion region with input bulk-driven differential pair sporting positive feedback source degeneration for transconductance enhancement. In addition, the distributed layout configuration is used for all the transistors to mitigate the effect of halo implants for higher output impedance. Combining these two approaches, we experimentally demonstrate a high gain of over 60-dB with just 18-nW power consumption from 0.25-V power supply. The use of enhanced bulk-driven differential pair and distributed layout can help overcome some of the constraints imposed by nanometer CMOS process for high performance analog circuits in weak inversion region.",
"title": ""
},
{
"docid": "36e8ecc13c1f92ca3b056359e2d803f0",
"text": "We propose a novel module, the reviewer module, to improve the encoder-decoder learning framework. The reviewer module is generic, and can be plugged into an existing encoder-decoder model. The reviewer module performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a fact vector after each review step; the fact vectors are used as the input of the attention mechanism in the decoder. We show that the conventional encoderdecoders are a special case of our framework. Empirically, we show that our framework can improve over state-of-the-art encoder-decoder systems on the tasks of image captioning and source code captioning.",
"title": ""
},
{
"docid": "b9cf32ef9364f55c5f59b4c6a9626656",
"text": "Graph-based methods have gained attention in many areas of Natural Language Processing (NLP) including Word Sense Disambiguation (WSD), text summarization, keyword extraction and others. Most of the work in these areas formulate their problem in a graph-based setting and apply unsupervised graph clustering to obtain a set of clusters. Recent studies suggest that graphs often exhibit a hierarchical structure that goes beyond simple flat clustering. This paper presents an unsupervised method for inferring the hierarchical grouping of the senses of a polysemous word. The inferred hierarchical structures are applied to the problem of word sense disambiguation, where we show that our method performs significantly better than traditional graph-based methods and agglomerative clustering yielding improvements over state-of-the-art WSD systems based on sense induction.",
"title": ""
},
{
"docid": "c01fe3b589479fb14568ce1e00a08125",
"text": "Purpose – The purpose of this paper is to propose a model to examine the impact of organizational support on behavioral intention (BI) regarding enterprise resource planning (ERP) implementation based on the technology acceptance model (TAM). Design/methodology/approach – A research model is proposed which describes the effects of organizational support, both formal and informal, on factors of TAM. A survey questionnaire is developed to test the proposed model. A total of 700 of questionnaires are distributed to users in small and medium enterprises that have implemented ERP systems in Korea and 209 responses are used for analyses. Structural equation modeling is employed to test the research hypotheses. Findings – The results indicate that the organizational support is an important factor for perceived usefulness (PU) and perceived ease of use (PEOU). PU and PEOU seem to lead to a higher level of interest in the ERP system and BI to use the system. The most notable finding of our study is that organizational support is positively associated with factors of TAM. Research limitations/implications – The survey data used in this paper are collected from smalland medium-sized companies in South Korea. Thus, the respondents in these firms might have been trained at different levels or on different modules of ERP, which would yield diversity in subject experience with different ERP systems. Originality/value – To improve the efficiency and effectiveness of ERP implementation in a real world environment, organizations need to better understand user satisfaction. The TAM model provides a theoretical construct to explain how user satisfaction is affected.",
"title": ""
},
{
"docid": "b4abab79e652bb4d6d3ea31df81ebd40",
"text": "Humor is an integral part of our day-to-day communication making it interesting and plausible. The growing demand in robotics and the constant urge for machines to pass the Turing test to eliminate the thin line difference between human and machine behavior make it essential for machines to be able to use humor in communication. Moreover, Learning is a continuous process and very important at every stage of life. However sometimes merely reading from a book induces lassitude and lack of concentration may hamper strong foundation building. Children suffering from Autism Spectrum Disorder (ASD) suffer from slow learning and grasping issues. English being a funny language, a particular word has multiple meanings, making it difficult for children with ASD to cognize it. Solving riddles induces fast learning and sharpness in children including those affected by ASD. The existing systems however, are too far from being used in any practical application. This paper proposes a system that uses core ideas of JAPE to create puns for entertainment and vocabulary building purpose for children. General Terms Homophone: Two or more words having the same pronunciation but different meanings, origins, or spelling (e.g. new and knew) [4]. Homonym: Two or more words having the same spelling or pronunciation but different meanings and origins (e.g. pole. and pole) [5]. Rhyming words: Words that have the same ending sounds. E.g. are cat, hat, bat, mat, fat and rat [6]. Punning words: A form of word play that suggests two or more meanings, by exploiting multiple meanings of words, or of similar-sounding words, for an intended humorous or rhetorical effect [7]. Pun generator: A system that uses punning words to generate riddles/jokes with an intention of making it humorous.",
"title": ""
},
{
"docid": "ae97effd4e999ccf580d32c8522b6f59",
"text": "Eight isolates of cellulose-degrading bacteria (CDB) were isolated from four different invertebrates (termite, snail, caterpillar, and bookworm) by enriching the basal culture medium with filter paper as substrate for cellulose degradation. To indicate the cellulase activity of the organisms, diameter of clear zone around the colony and hydrolytic value on cellulose Congo Red agar media were measured. CDB 8 and CDB 10 exhibited the maximum zone of clearance around the colony with diameter of 45 and 50 mm and with the hydrolytic value of 9 and 9.8, respectively. The enzyme assays for two enzymes, filter paper cellulase (FPC), and cellulase (endoglucanase), were examined by methods recommended by the International Union of Pure and Applied Chemistry (IUPAC). The extracellular cellulase activities ranged from 0.012 to 0.196 IU/mL for FPC and 0.162 to 0.400 IU/mL for endoglucanase assay. All the cultures were also further tested for their capacity to degrade filter paper by gravimetric method. The maximum filter paper degradation percentage was estimated to be 65.7 for CDB 8. Selected bacterial isolates CDB 2, 7, 8, and 10 were co-cultured with Saccharomyces cerevisiae for simultaneous saccharification and fermentation. Ethanol production was positively tested after five days of incubation with acidified potassium dichromate.",
"title": ""
}
] |
scidocsrr
|
2622f89367bdbe4f1176b5d758fb50a1
|
Intelligent churn prediction in telecom: employing mRMR feature selection and RotBoost based ensemble classification
|
[
{
"docid": "3b886932b4b036ec4e9ceafc5066397b",
"text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.11.083 E-mail address: [email protected] 1 PLN is the abbreviation of the Polish currency unit In this article, we test the usefulness of the popular data mining models to predict churn of the clients of the Polish cellular telecommunication company. When comparing to previous studies on this topic, our research is novel in the following areas: (1) we deal with prepaid clients (previous studies dealt with postpaid clients) who are far more likely to churn, are less stable and much less is known about them (no application, demographical or personal data), (2) we have 1381 potential variables derived from the clients’ usage (previous studies dealt with data with at least tens of variables) and (3) we test the stability of models across time for all the percentiles of the lift curve – our test sample is collected six months after the estimation of the model. The main finding from our research is that linear models, especially logistic regression, are a very good choice when modelling churn of the prepaid clients. Decision trees are unstable in high percentiles of the lift curve, and we do not recommend their usage. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8589ec481e78d14fbeb3e6e4205eee50",
"text": "This paper presents a novel ensemble classifier generation technique RotBoost, which is constructed by combining Rotation Forest and AdaBoost. The experiments conducted with 36 real-world data sets available from the UCI repository, among which a classification tree is adopted as the base learning algorithm, demonstrate that RotBoost can generate ensemble classifiers with significantly lower prediction error than either Rotation Forest or AdaBoost more often than the reverse. Meanwhile, RotBoost is found to perform much better than Bagging and MultiBoost. Through employing the bias and variance decompositions of error to gain more insight of the considered classification methods, RotBoost is seen to simultaneously reduce the bias and variance terms of a single tree and the decrement achieved by it is much greater than that done by the other ensemble methods, which leads RotBoost to perform best among the considered classification procedures. Furthermore, RotBoost has a potential advantage over AdaBoost of suiting parallel execution. 2008 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "1313fbdd0721b58936a05da5080239df",
"text": "Bug tracking systems are valuable assets for managing maintenance activities. They are widely used in open-source projects as well as in the software industry. They collect many different kinds of issues: requests for defect fixing, enhancements, refactoring/restructuring activities and organizational issues. These different kinds of issues are simply labeled as \"bug\" for lack of a better classification support or of knowledge about the possible kinds.\n This paper investigates whether the text of the issues posted in bug tracking systems is enough to classify them into corrective maintenance and other kinds of activities.\n We show that alternating decision trees, naive Bayes classifiers, and logistic regression can be used to accurately distinguish bugs from other kinds of issues. Results from empirical studies performed on issues for Mozilla, Eclipse, and JBoss indicate that issues can be classified with between 77% and 82% of correct decisions.",
"title": ""
},
{
"docid": "4961d8c16f2376834c19336a230f61b3",
"text": "UNLABELLED\nPramipexole is an orally active non-ergoline dopamine agonist with selective activity at dopamine receptors belonging to the D2 receptor subfamily (D2, D3, D4 receptor subtypes) and with preferential affinity for the D3 receptor subtype. It is approved as monotherapy in early Parkinson's disease and as adjunctive therapy to levodopa in patients with advanced disease experiencing motor effects because of diminished response to levodopa. The potential neuroprotective effects of pramipexole have been shown in animal and in vitro studies. Data from relatively long term (10- or 31-week) studies suggest that pramipexole monotherapy (0.375 to 6.0 mg/day) can improve activities of daily living and motor symptoms in patients with early Parkinson's disease. Pramipexole (0.375 to 4.5 mg/day for 31 or 36 weeks), as an adjunct to levodopa in advanced disease, improved activities of daily living and motor symptoms, reduced the duration and severity of 'off' periods and allowed a reduction in levodopa dosage. Mentation, behaviour and mood [Unified Parkinson's Disease Rating Scale (UPDRS) part I], and timed walking test were not significantly improved. The extent of disability improved according to the UPDRS parts II and III but, when assessed by secondary efficacy parameters, it is unclear whether disability or the severity of disease improved. No significant differences were observed in patients randomised to pramipexole or bromocriptine according to a secondary hypothesis in a prospective study in which both drugs were better than placebo. Some quality-of-life measures improved with active treatment relative to placebo. Further studies comparing pramipexole with other dopamine agonists and levodopa in patients with early and advanced Parkinson's disease would be useful. In pramipexole recipients with early disease, the most commonly experienced adverse events were nausea, dizziness, somnolence, insomnia, constipation, asthenia and hallucinations. The most commonly reported adverse events in pramipexole recipients with advanced disease were orthostatic hypotension, dyskinesias, extrapyramidal syndrome (defined as a worsening of the Parkinson's disease), dizziness, hallucinations, accidental injury, dream abnormalities, confusion, constipation, asthenia, somnolence, dystonia, gait abnormality, hypertonia, dry mouth, amnesia and urinary frequency. The incidence of some adverse events did not greatly differ between pramipexole and placebo recipients.\n\n\nCONCLUSIONS\nPramipexole is effective as adjunctive therapy to levodopa in patients with advanced Parkinson's disease. However, the potential beneficial effects of pramipexole on disease progression need to be confirmed in clinical studies. The efficacy of pramipexole monotherapy in patients with early disease has also been demonstrated, although the use of dopamine agonists in early Parkinson's disease remains controversial.",
"title": ""
},
{
"docid": "1a2f2e75691e538c867b6ce58591a6a5",
"text": "Despite the profusion of NIALM researches and products using complex algorithms, addressing the market for low cost, compact, real-time and effective NIALM smart meters is still a challenge. This paper talks about the design of a NIALM smart meter for home appliances, with the ability to self-detect and disaggregate most home appliances. In order to satisfy the compact, real-time, low price requirements and to solve the challenge in slow transient and multi-state appliances, two algorithms are used: the CUSUM to improve the event detection and the Genetic Algorithm (GA) for appliance disaggregation. Evaluation of these algorithms has been done according to public NIALM REDD data set [6]. They are now in first stage of architecture design using Labview FPGA methodology. KeywordsNIALM, CUSUM, Genetic Algorithm, K-mean, classification, smart meter, FPGA.",
"title": ""
},
{
"docid": "30045d9e8153110926a0157c0cdcebf3",
"text": "The self-oscillating flyback converter is a popular circuit for cost-sensitive applications due to its simplicity and low component count. It is widely employed in mobile phone chargers and as the stand-by power source in off-line power supplies for data-processing equipment. However, this circuit was almost not explored for supplying power LEDs. This paper presents a self-oscillating flyback driver for supplying Power LEDs directly, with no additional circuit. A simplified mathematical model of the LED was used to characterize the self-oscillating converter for driving the power LEDs. With the proposed converter the LEDs manufacturing tolerances and drifts over temperature presents little to no influence over the LED average current. This is proved by using the LED electrical model on the analysis.",
"title": ""
},
{
"docid": "b1f0b80c51af4c146495eb2b1e3b9ba9",
"text": "This paper presents an average current mode buck dimmable light-emitting diode (LED) driver for large-scale single-string LED backlighting applications. The proposed integrated current control technique can provide exact current control signals by using an autozeroed integrator to enhance the accuracy of the average current of LEDs while driving a large number of LEDs. Adoption of discontinuous low-side current sensing leads to power loss reduction. Adoption of a fast-settling technique allows the LED driver to enter into the steady state within three switching cycles after the dimming signal is triggered. Implemented in a 0.35-μm HV CMOS process, the proposed LED driver achieves 1.7% LED current error and 98.16% peak efficiency over an input voltage range of 110 to 200 V while driving 30 to 50 LEDs.",
"title": ""
},
{
"docid": "9cf3df49790c1d2107035ef868f8be1e",
"text": "As computational thinking becomes a fundamental skill for the 21st century, K-12 teachers should be exposed to computing principles. This paper describes the implementation and evaluation of a computational thinking module in a required course for elementary and secondary education majors. We summarize the results from open-ended and multiple-choice questionnaires given both before and after the module to assess the students' attitudes toward and understanding of computational thinking. The results suggest that given relevant information about computational thinking, education students' attitudes toward computer science becomes more favorable and they will be more likely to integrate computing principles in their future teaching.",
"title": ""
},
{
"docid": "1c7a666c905aeb6842d41233d52a64b7",
"text": "Development of movement assistance devices used in rehabilitation must take into account the variable human-machine interaction. To this end, electromyography (EMG) sensors detecting the muscle activation are regarded as a feasible method. In this paper, we focus on torque estimation of the human knee joint. The joint motion analysis was captured through the developed EMG sensor, and subsequently torque estimation by using an EMG-driven model. For calibrating the EMG-driven model, the experimental torque was computed by employing the inverse dynamics that have been ensured within a well-established leg-orthosis system. Here, an impedance-controlled variable stiffness actuator has been implemented. To realize a wide range of calibration tasks, we compared the knee torque by the EMG-driven model and the inverse dynamics through setting different impedance controller gains. The proposed calibration results will provide a support towards the applications in the bio-feedback based impedance control.",
"title": ""
},
{
"docid": "1db0dfb511f5ebad462880c6562404ec",
"text": "In this paper, we propose quantized densely connected UNets for efficient visual landmark localization. The idea is that features of the same semantic meanings are globally reused across the stacked U-Nets. This dense connectivity largely improves the information flow, yielding improved localization accuracy. However, a vanilla dense design would suffer from critical efficiency issue in both training and testing. To solve this problem, we first propose order-K dense connectivity to trim off long-distance shortcuts; then, we use a memory-efficient implementation to significantly boost the training efficiency and investigate an iterative refinement that may slice the model size in half. Finally, to reduce the memory consumption and high precision operations both in training and testing, we further quantize weights, inputs, and gradients of our localization network to low bit-width numbers. We validate our approach in two tasks: human pose estimation and face alignment. The results show that our approach achieves state-of-the-art localization accuracy, but using ∼70% fewer parameters, ∼98% less model size and saving ∼75% training memory compared with other benchmark localizers. The code is available at https://github.com/zhiqiangdon/CU-Net.",
"title": ""
},
{
"docid": "abe729a351eb9dbc1688abe5133b28c2",
"text": "C. H. Tian B. K. Ray J. Lee R. Cao W. Ding This paper presents a framework for the modeling and analysis of business model designs involving a network of interconnected business entities. The framework includes an ecosystem-modeling component, a simulation component, and a serviceanalysis component, and integrates methods from value network modeling, game theory analysis, and multiagent systems. A role-based paradigm is introduced for characterizing ecosystem entities in order to easily allow for the evolution of the ecosystem and duplicated functionality for entities. We show how the framework can be used to provide insight into value distribution among the entities and evaluation of business model performance under different scenarios. The methods are illustrated using a case study of a retail business-to-business service ecosystem.",
"title": ""
},
{
"docid": "57aaa47e45e8542767e327cf683288cf",
"text": "Mobile edge computing usually uses caching to support multimedia contents in 5G mobile Internet to reduce the computing overhead and latency. Mobile edge caching (MEC) systems are vulnerable to various attacks such as denial of service attacks and rogue edge attacks. This article investigates the attack models in MEC systems, focusing on both the mobile offloading and the caching procedures. In this article, we propose security solutions that apply reinforcement learning (RL) techniques to provide secure offloading to the edge nodes against jamming attacks. We also present lightweight authentication and secure collaborative caching schemes to protect data privacy. We evaluate the performance of the RL-based security solution for mobile edge caching and discuss the challenges that need to be addressed in the future.",
"title": ""
},
{
"docid": "fef5bf498eb0da7a62a2bc1433e9bd5f",
"text": "The “CRC Handbook” is well-known to anyone who has taken a college chemistry course, and CRC Press has traded on this name-familiarity to greatly expand its “Handbook” series. One of the newest entries to join titles such as the Handbook of Combinatorial Designs, the Handbook of Exact Solutions to Ordinary Differential Equations and the Handbook of Edible Weeds, is the Handbook of Graph Theory. Its editors will be familiar to many as the authors of the textbook, Graph Theory and Its Applications, which is also published by CRC Press. The handbooks about mathematics typically strive for comprehensiveness in a concise style, with sections contributed by specialists within subdisciplines. This volume runs to 1167 pages with 60 contributors providing 54 sections, organized into 11 chapters. As an indication of the topics covered, the chapter titles are Introduction to Graphs; Graph Representation; Directed Graphs; Connectivity and Traversability; Colorings and Related Topics; Algebraic Graph Theory; Topological Graph Theory; Analytic Graph Theory; Graphical Measurement; Graphs in Computer Science; Networks and Flows. Each section is organized into subsections that begin with the basic definitions and ideas, provide a few key examples and conclude with a list of facts (theorems) and remarks. Each of these items is referenced with a label (e.g. 7.7.3.F29 is the 29th Fact of Section 7.7, and can be found in Subsection 7.7.3). This makes for easy crossreferencing within the volume, and provides an easy reference system for the reader’s own use. Sections conclude with references to monographs and important research articles. And on occasion there are conjectures or open problems listed too. The author of every section has provided a glossary, which the editors have coalesced into separate glossaries for each of the eleven chapters. The editors have also strived for uniform terminology and notation throughout, and where this is impossible, the distinctions, subtleties or conflicts between subdisciplines have been carefully highlighted. These types of handbooks shine when one cannot remember that the Ramsey number R(5, 14) is only known to be bounded between 221 and 1280, or one cannot recall (or never knew) what an irredundance number is. For these sorts of questions, the believable claim of 90% content coverage should guarantee frequent success when it is consulted. The listed facts never include any proofs, and many do not include any reference to the literature. Presumably some of them are trivialities, but they could all use some pointer to where one can find a proof. The editors are proud of how long the bibliographies are, but sometimes they are too short. In most every case, there could be more guidance about which elements of the bibliography are the most useful for further general investigations into a topic. An advanced graduate student or researcher of graph theory will find a book of this sort invaluable. Within their specialty the coverage might be considered skimpy. However, for those occasions when ideas or results from an allied specialty are of interest, or only if one is curious about exactly what some topic involves, or what is known about it, then consulting this volume will answer many simple questions quickly. Similarly, someone in a related discipline, such as cryptography or computer science, whose work requires some knowledge of the state-of-the-art in graph theory, will also find this a good volume to consult for quick, easily located, answers. 
Given that it summarizes a field where over 1,000 papers are published each year, it is a must-have for the well-equipped mathematics research library.",
"title": ""
},
{
"docid": "6fab26c4c8fa05390aa03998a748f87d",
"text": "Click prediction is one of the fundamental problems in sponsored search. Most of existing studies took advantage of machine learning approaches to predict ad click for each event of ad view independently. However, as observed in the real-world sponsored search system, user’s behaviors on ads yield high dependency on how the user behaved along with the past time, especially in terms of what queries she submitted, what ads she clicked or ignored, and how long she spent on the landing pages of clicked ads, etc. Inspired by these observations, we introduce a novel framework based on Recurrent Neural Networks (RNN). Compared to traditional methods, this framework directly models the dependency on user’s sequential behaviors into the click prediction process through the recurrent structure in RNN. Large scale evaluations on the click-through logs from a commercial search engine demonstrate that our approach can significantly improve the click prediction accuracy, compared to sequence-independent approaches.",
"title": ""
},
{
"docid": "2a51a83158265a1695ba51d9acef5c48",
"text": "The eggs (nits) of head and body lice (Pediculus humanus capitis, Pediculus humanus corporis) were incubated for 5, 10, 15, 20, 30 or 45 min into a neem seed extract contained in a fine shampoo formulation (e.g. Wash Away® Louse), which is known for its significant killing effects of larvae and adults of head lice. The aim of the study was to test whether the developmental stages inside the eggs are also killed after the incubation into the shampoo. It was found that an incubation time of only 5 min was sufficient to prohibit any hatching of larvae, whilst 93 ± 4% of the larvae in the untreated controls of body lice hatched respectively about 76% of the controls in the case of head lice. Apparently, the neem-based shampoo blocked the aeropyles of the eggs, thus preventing the embryos of both races of lice from accessing oxygen and from releasing carbon dioxide. Thus, this product offers a complete cure from head lice upon a single treatment, if the lice (motile stages, eggs) are fully covered for about 10 min.",
"title": ""
},
{
"docid": "bfde0c836406a25a08b7c95b330aaafa",
"text": "The concept of agile process models has gained great popularity in software (SW) development community in past few years. Agile models promote fast development. This property has certain drawbacks, such as poor documentation and bad quality. Fast development promotes use of agile process models in small-scale projects. This paper modifies and evaluates extreme programming (XP) process model and proposes a novel adaptive process mode based on these modifications. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "81614a61e55b66d5cfc078bc7c8b88fc",
"text": "Navigation assistance for visually impaired (NAVI) refers to systems that are able to assist or guide people with vision loss, ranging from partially sighted to totally blind, by means of sound commands. Many researchers are working to assist visually impaired people in different ways like voice based assistance, ultrasonic based assistance, camera based assistance and in some advance way researchers are trying to give transplantation of real eyes with robotic eyes which can capable enough to plot the real image over patient retina using some biomedical technologies. In other way creating a fusion of sensing technology and voice based guidance system some of the products were developed which could give better result than individual technology. There are some limitation in system like obstacle detection which could not see the object but detection the object and camera based system can't work properly in different light level so the proposed system is a fusion of color sensing sensor and the obstacle sensor along with the voice based assistance system. The main idea of the proposed system to make person aware of path he is walking and also the obstacle in the path.",
"title": ""
},
{
"docid": "972ee7027c71364e8fe1894088f79d8a",
"text": "A fully integrated output capacitor-less, nMOS regulation FET low-dropout (LDO) regulator with fast transient response for system-on-chip power regulation applications is presented. The error amplifier (EA) consists of a differential cross-coupled common-gate (CG) input stage achieving twice the transconductance and unity-gain-bandwidth in comparison to a conventional differential common-source stage. The low input resistance of the CG EA improves stability of the LDO over a wide range of load currents. The LDO employs a current-reused dynamic biasing technique to further improve the load transient response, with no extra quiescent current. It is designed and fabricated in a 0.18-<inline-formula> <tex-math notation=\"LaTeX\">${\\mu }\\text{m}$ </tex-math></inline-formula> CMOS technology for an input voltage range of 1.6–1.8 V, and an output voltage range of 1.4–1.6 V. Measured undershoot is 158 mV and settling time is 20 ns for 9–40 mA load change in 250 ps edge-time with zero load capacitance. The LDO core consumes 130 <inline-formula> <tex-math notation=\"LaTeX\">${\\mu }\\text{A}$ </tex-math></inline-formula> of quiescent current, occupies 0.21 mm<sup>2</sup> die area, and sustains 0–50 pF of on-chip load capacitance.",
"title": ""
},
{
"docid": "07ed58a5c4fdd926924ad2590ff33113",
"text": "The number field sieve is an algorithm to factor integers of the form r e ± s for small positive r and s . This note is intended as a ‘report on work in progress’ on this algorithm. We informally describe the algorithm, discuss several implementation related aspects, and present some of the factorizations obtained so far. We also mention some solutions to the problems encountered when generalizing the algorithm to general integers using an idea of Buhler and Pomerance. It is not unlikely that this leads to a general purpose factoring algorithm that is asymptotically substantially faster than the fastest factoring algorithms known so far, like the multiple polynomial quadratic sieve.",
"title": ""
},
{
"docid": "8a3e49797223800cb644fe2b819f9950",
"text": "In this paper, we present machine learning approaches for characterizing and forecasting the short-term demand for on-demand ride-hailing services. We propose the spatio-temporal estimation of the demand that is a function of variable effects related to traffic, pricing and weather conditions. With respect to the methodology, a single decision tree, bootstrap-aggregated (bagged) decision trees, random forest, boosted decision trees, and artificial neural network for regression have been adapted and systematically compared using various statistics, e.g. R-square, Root Mean Square Error (RMSE), and slope. To better assess the quality of the models, they have been tested on a real case study using the data of DiDi Chuxing, the main on-demand ride-hailing service provider in China. In the current study, 199,584 time-slots describing the spatio-temporal ride-hailing demand has been extracted with an aggregated-time interval of 10 mins. All the methods are trained and validated on the basis of two independent samples from this dataset. The results revealed that boosted decision trees provide the best prediction accuracy (RMSE=16.41), while avoiding the risk of over-fitting, followed by artificial neural network (20.09), random forest (23.50), bagged decision trees (24.29) and single decision tree (33.55). ∗Currently under review for publication †Local Environment Management & Analysis (LEMA), Department of Urban and Environmental Engineering (UEE), University of Liège, Allée de la Découverte 9, Quartier Polytech 1, Liège, Belgium, Email: [email protected] ‡Laboratory of Innovations in Transportation (LITrans), Department of Civil, Geotechnical, and Mining Engineering, Polytechnique Montréal, Montréal, Canada, Email: [email protected] §Laboratory of Innovations in Transportation (LITrans), Department of Civil, Geotechnical, and Mining Engineering, Polytechnique Montréal, Montréal, Canada, Email: [email protected] ¶Local Environment Management & Analysis (LEMA), Department of Urban and Environmental Engineering (UEE), University of Liège, Allée de la Découverte 9, Quartier Polytech 1, Liège, Belgium ‖Local Environment Management & Analysis (LEMA), Department of Urban and Environmental Engineering (UEE), University of Liège, Allée de la Découverte 9, Quartier Polytech 1, Liège, Belgium ar X iv :1 70 3. 02 43 3v 1 [ cs .L G ] 7 M ar 2 01 7",
"title": ""
},
{
"docid": "9f469cdc1864aad2026630a29c210c1f",
"text": "This paper proposes an asymptotically optimal hybrid beamforming solution for large antenna arrays by exploiting the properties of the singular vectors of the channel matrix. It is shown that the elements of the channel matrix with Rayleigh fading follow a normal distribution when large antenna arrays are employed. The proposed beamforming algorithm is effective in both sparse and rich propagation environments, and is applicable for both point-to-point and multiuser scenarios. In addition, a closed-form expression and a lower bound for the achievable rates are derived when analog and digital phase shifters are employed. It is shown that the performance of the hybrid beamformers using phase shifters with more than 2-bit resolution is comparable with analog phase shifting. A novel phase shifter selection scheme that reduces the power consumption at the phase shifter network is proposed when the wireless channel is modeled by Rayleigh fading. Using this selection scheme, the spectral efficiency can be increased as the power consumption in the phase shifter network reduces. Compared with the scenario that all of the phase shifters are in operation, the simulation results indicate that the spectral efficiency increases when up to 50% of phase shifters are turned OFF.",
"title": ""
},
{
"docid": "502096a6816073d5a8c08f4c82de11fe",
"text": "Microfluidics is a flourishing field, enabling a wide range of biochemical and clinical applications such as cancer screening, micro-physiological system engineering, high-throughput drug testing, and point-of-care diagnostics. However, fabrication of microfluidic devices is often complicated, time consuming, and requires expensive equipment and sophisticated cleanroom facilities. Three-dimensional (3D) printing presents a promising alternative to traditional techniques such as lithography and PDMS-glass bonding, not only by enabling rapid design iterations in the development stage, but also by reducing the costs associated with institutional infrastructure, equipment installation, maintenance, and physical space. With the recent advancements in 3D printing technologies, highly complex microfluidic devices can be fabricated via single-step, rapid, and cost-effective protocols, making microfluidics more accessible to users. In this review, we discuss a broad range of approaches for the application of 3D printing technology to fabrication of micro-scale lab-on-a-chip devices.",
"title": ""
}
] |
scidocsrr
|
9765578b50fc821f8d90b55e6d8aced4
|
Block arrivals in the Bitcoin blockchain
|
[
{
"docid": "6ab1bc5fced659803724f2f7916be355",
"text": "Statistical Analysis of a Telephone Call Center Lawrence Brown, Noah Gans, Avishai Mandelbaum, Anat Sakov, Haipeng Shen, Sergey Zeltyn and Linda Zhao Lawrence Brown is Professor, Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104 . Noah Gans is Associate Professor, Department of Operations and Information Management, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104 . Avishai Mandelbaum is Professor, Faculty of Industrial Engineering and Management, Technion, Haifa, Israel . Anat Sakov is Postdoctoral Fellow, Tel-Aviv University, Tel-Aviv, Israel . Haipeng Shen is Assistant Professor, Department of Statistics, University of North Carolina, Durham, NC 27599 . Sergey Zeltyn is Ph.D. Candidate, Faculty of Industrial Engineering and Management, Technion, Haifa, Israel . Linda Zhao is Associate Professor, Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104 . This work was supported by National Science Foundation DMS-99-71751 and DMS-99-71848, the Sloane Foundation, Israeli Science Foundation grants 388/99 and 126/02, the Wharton Financial Institutions Center, and Technion funds for the promotion of research and sponsored research. Version of record first published: 31 Dec 2011.",
"title": ""
},
{
"docid": "f181c3fe17392239e5feaef02c37dd11",
"text": "We present a formal model of synchronous processes without distinct identifiers (i.e., anonymous processes) that communicate using one-way public broadcasts. Our main contribution is a proof that the Bitcoin protocol achieves consensus in this model, except for a negligible probability, when Byzantine faults make up less than half the network. The protocol is scalable, since the running time and message complexity are all independent of the size of the network, instead depending only on the relative computing power of the faulty processes. We also introduce a requirement that the protocol must tolerate an arbitrary number of passive clients that receive broadcasts but can not send. This leads to a tight 2f + 1 resilience bound.",
"title": ""
}
] |
[
{
"docid": "b2911f3df2793066dde1af35f5a09d62",
"text": "Cloud computing is drawing attention from both practitioners and researchers, and its adoption among organizations is on the rise. The focus has mainly been on minimizing fixed IT costs and using the IT resource flexibility offered by the cloud. However, the promise of cloud computing is much greater. As a disruptive technology, it enables innovative new services and business models that decrease time to market, create operational efficiencies and engage customers and citizens in new ways. However, we are still in the early days of cloud computing, and, for organizations to exploit the full potential, we need knowledge of the potential applications and pitfalls of cloud computing. Maturity models provide effective methods for organizations to assess, evaluate, and benchmark their capabilities as bases for developing roadmaps for improving weaknesses. Adopting the business-IT maturity model by Pearlson & Saunders (2007) as analytical framework, we synthesize the existing literature, identify levels of cloud computing benefits, and establish propositions for practice in terms of how to realize these benefits.",
"title": ""
},
{
"docid": "8b63800da2019180d266297647e3dbc0",
"text": "Most of the work in machine learning assume that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the class-probability distribution that generate the examples changes over time. We present a method for detection of changes in the probability distribution of examples. A central idea is the concept of context: a set of contiguous examples where the distribution is stationary. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error wil decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared, if in a sequence of examples, the error increases reaching the warning level at example kw, and the drift level at example kd. This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since kw. The method was tested with a set of eight artificial datasets and a real world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show a good performance detecting drift and also with learning the new concept. We also observe that the method is independent of the learning algorithm.",
"title": ""
},
{
"docid": "e900bd24f24f5b6c4ec1cab2fac5ce45",
"text": "The recent emergence of lab-on-a-chip (LoC) technology has led to a paradigm shift in many healthcare-related application areas, e.g., point-of-care clinical diagnostics, high-throughput sequencing, and proteomics. A promising category of LoCs is digital microfluidic (DMF)-based biochips, in which nanoliter-volume fluid droplets are manipulated on a 2-D electrode array. A key challenge in designing such chips and mapping lab-bench protocols to a LoC is to carry out the dilution process of biochemical samples efficiently. As an optimization and automation technique, we present a dilution/mixing algorithm that significantly reduces the production of waste droplets. This algorithm takes O(n) time to compute at most n sequential mix/split operations required to achieve any given target concentration with an error in concentration factor less than [1/(2n)]. To implement the algorithm, we design an architectural layout of a DMF-based LoC consisting of two O(n)-size rotary mixers and O(n) storage electrodes. Simulation results show that the proposed technique always yields nonnegative savings in the number of waste droplets and also in the total number of input droplets compared to earlier methods.",
"title": ""
},
{
"docid": "34bd41f7384d6ee4d882a39aec167b3e",
"text": "This paper presents a robust feedback controller for ball and beam system (BBS). The BBS is a nonlinear system in which a ball has to be balanced on a particular beam position. The proposed nonlinear controller designed for the BBS is based upon Backstepping control technique which guarantees the boundedness of tracking error. To tackle the unknown disturbances, an external disturbance estimator (EDE) has been employed. The stability analysis of the overall closed loop robust control system has been worked out in the sense of Lyapunov theory. Finally, the simulation studies have been done to demonstrate the suitability of proposed scheme.",
"title": ""
},
{
"docid": "e4007c7e6a80006238e1211a213e391b",
"text": "Various techniques for multiprogramming parallel multiprocessor systems have been proposed recently as a way to improve performance. A natural approach is to divide the set of processing elements into independent partitions, and simultaneously execute a diierent parallel program in each partition. Several issues arise, including the determination of the optimal number of programs allowed to execute simultaneously (i.e., the number of partitions) and the corresponding partition sizes. This can be done statically, dynamically, or adaptively, depending on the system and workload characteristics. In this paper several adaptive partitioning policies are evaluated. Their behavior, as well as the behavior of static policies, is investigated using real parallel programs. The policy applicability to actual systems is addressed, and implementation results of the proposed policies on an iPSC/2 hypercube system are reported. The concept of robustness (i.e., the ability to perform well on a wide range of workload types over a wide range of arrival rates) is presented and quantiied. Relative rankings of the policies are obtained, depending on the speciic work-load characteristics. A trade-oo is shown between potential performance and the amount of knowledge of the workload characteristics required to select the best policy. A policy that performs best when such knowledge of workload parallelism and/or arrival rate is not available is proposed as the most robust of those analyzed.",
"title": ""
},
{
"docid": "94b061285a0ca52aa0e82adcca392416",
"text": "Stochastic convex optimization is a basic and well studied primitive in machine learning. It is well known that convex and Lipschitz functions can be minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which updates according to the direction of the gradients, rather than the gradients themselves. In this paper we analyze a stochastic version of NGD and prove its convergence to a global minimum for a wider class of functions: we require the functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens the concept of unimodality to multidimensions and allows for certain types of saddle points, which are a known hurdle for first-order optimization methods such as gradient descent. Locally-Lipschitz functions are only required to be Lipschitz in a small region around the optimum. This assumption circumvents gradient explosion, which is another known hurdle for gradient descent variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic normalized gradient descent algorithm provably requires a minimal minibatch size.",
"title": ""
},
{
"docid": "e71d55a573426068fab2212a55bc3682",
"text": "In this article we present a theoretical approach to cognitive control and attention modulation, as well as review studies related to such a view, using an auditory task based on dichotic presentations of simple consonant-vowel syllables. The reviewed work comes out of joint research efforts by the 'Attention-node' at the 'Nordic Center of Excellence in Cognitive Control'. We suggest a new way of defining degrees of cognitive control based on systematically varying the stimulus intensity of the right or left ear dichotic stimulus, thus parametrically varying the degree of stimulus interference and conflict when assessing the amount of cognitive control necessary to resolve the interference. We first present an overview and review of previous studies using the so-called \"forced-attention\" dichotic listening paradigm. We then present behavioral and neuroimaging data to explore the suggested cognitive control model, with examples from normal adults, clinical and special ability groups.",
"title": ""
},
{
"docid": "0c60255bd78597a6389852fc34bab1c4",
"text": "The interaction between indomethacin and human serum albumin (HSA) was investigated by fluorescence quenching technique and UV-vis absorption spectroscopy. The results of fluorescence titration revealed that indomethacin, strongly quench the intrinsic fluorescence of HSA by static quenching and nonradiative energy transfer. The binding site number n and the apparent binding constant K(A), were calculated using linear and nonlinear fit to the experimental data. The distance r between donor (HSA) and acceptor (indomethacin) was obtained according to fluorescence resonance energy transfer (FRET). The study suggests that the donor and the acceptor are bound at different locations but within the quenching distance.",
"title": ""
},
{
"docid": "a1a2c3f62bd2923fc317fcda8c907196",
"text": "Hardware intellectual-property (IP) cores have emerged as an integral part of modern system-on-chip (SoC) designs. However, IP vendors are facing major challenges to protect hardware IPs from IP piracy. This paper proposes a novel design methodology for hardware IP protection using netlist-level obfuscation. The proposed methodology can be integrated in the SoC design and manufacturing flow to simultaneously obfuscate and authenticate the design. Simulation results for a set of ISCAS-89 benchmark circuits and the advanced-encryption-standard IP core show that high levels of security can be achieved at less than 5% area and power overhead under delay constraint.",
"title": ""
},
{
"docid": "481f4a4b14d4594d8b023f9df074dfeb",
"text": "We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analyses that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is used to an important and novel application SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks if a pair of image and depth map is consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post processing. We demonstrate that our SafetyNet is robust to adversarial examples built from currently known attacking approaches.",
"title": ""
},
{
"docid": "ac1018fb262f38faf50071603292c3c0",
"text": "This paper provides an overview and an evaluation of the Cetus source-to-source compiler infrastructure. The original goal of the Cetus project was to create an easy-to-use compiler for research in automatic parallelization of C programs. In meantime, Cetus has been used for many additional program transformation tasks. It serves as a compiler infrastructure for many projects in the US and internationally. Recently, Cetus has been supported by the National Science Foundation to build a community resource. The compiler has gone through several iterations of benchmark studies and implementations of those techniques that could improve the parallel performance of these programs. These efforts have resulted in a system that favorably compares with state-of-the-art parallelizers, such as Intel’s ICC. A key limitation of advanced optimizing compilers is their lack of runtime information, such as the program input data. We will discuss and evaluate several techniques that support dynamic optimization decisions. Finally, as there is an extensive body of proposed compiler analyses and transformations for parallelization, the question of the importance of the techniques arises. This paper evaluates the impact of the individual Cetus techniques on overall program performance.",
"title": ""
},
{
"docid": "066d22c1c5554bf32118baa331c64a88",
"text": "A center-fed, single-layer, planar antenna with unilateral radiation patterns is investigated. The antenna consists of a turnstile-shaped patch and a slotted ground plane, which function as a vertical magnetic dipole and a horizontal electric dipole, respectively. By combining the two orthogonal dipoles with the same radiation intensities and antiphases, unilateral patterns with wide beamwidth and high front-to-back (F/B) ratio are achieved. As the unilateral radiation pattern can be easily steered in the horizontal plane by changing the slot location, a pattern reconfigurable antenna is further designed by using p-i-n diodes to control the connection states of the radial slots on the ground plane. Four steerable beams are obtained, capable of covering the entire azimuthal plane. For demonstration, both the unilateral and pattern reconfigurable antennas operating at 2.4 GHz WLAN band (2.40–2.48 GHz) were fabricated and measured. The measured overlapping bandwidths, with $\\vert S_{11}\\vert <-10$ dB and F/B ratio >15 dB, are given by 7.0% (2.33–2.5 GHz) and 6.3% (2.32–2.47 GHz), respectively.",
"title": ""
},
{
"docid": "01594ac29e66b229dbfacd0e1a967e3c",
"text": "This article describes two approaches for computing the line-of-sight between objects in real terrain data. Our purpose is to find an efficient algorithm for combat elements in warfare simulation such as soldiers, troops, vehicles, ships, and aircrafts, thus allowing a simulated combat theater.",
"title": ""
},
{
"docid": "5a71d766ecd60b8973b965e53ef8ddfd",
"text": "An m-polar fuzzy model is useful for multi-polar information, multi-agent, multi-attribute and multiobject network models which gives more precision, flexibility, and comparability to the system as compared to the classical, fuzzy and bipolar fuzzy models. In this paper, m-polar fuzzy sets are used to introduce the notion of m-polar psi-morphism on product m-polar fuzzy graph (mFG). The action of this morphism is studied and established some results on weak and co-weak isomorphism. d2-degree and total d2-degree of a vertex in product mFG are defined and studied their properties. A real life situation has been modeled as an application of product mFG. c ©2018 World Academic Press, UK. All rights reserved.",
"title": ""
},
{
"docid": "907d5aa059ee85629ba0b2b131a9324a",
"text": "Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.",
"title": ""
},
{
"docid": "ab9a65fda5a628b1042d1a31f3cf6188",
"text": "Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated it to have better empirical performance compared to the state-of-theart methods. However, many questions regarding its theoretical performance remained open. In this paper, we design and analyze a generalization of Thompson Sampling algorithm for the stochastic contextual multi-armed bandit problem with linear payoff functions, when the contexts are provided by an adaptive adversary. This is among the most important and widely studied version of the contextual bandits problem. We prove a high probability regret bound of Õ( 2 √ T 1+ ) in time T for any 0 < < 1, where d is the dimension of each context vector and is a parameter used by the algorithm. Our results provide the first theoretical guarantees for the contextual version of Thompson Sampling, and are close to the lower bound of Ω(d √ T ) for this problem. This essentially solves a COLT open problem of Chapelle and Li [COLT 2012]. Proceedings of the 30 th International Conference on Machine Learning, Atlanta, Georgia, USA, 2013. JMLR: W&CP volume 28. Copyright 2013 by the author(s).",
"title": ""
},
{
"docid": "a21f04b6c8af0b38b3b41f79f2661fa6",
"text": "While Enterprise Architecture Management is an established and widely discussed field of interest in the context of information systems research, we identify a lack of work regarding quality assessment of enterprise architecture models in general and frameworks or methods on that account in particular. By analyzing related work by dint of a literature review in a design science research setting, we provide twofold contributions. We (i) suggest an Enterprise Architecture Model Quality Framework (EAQF) and (ii) apply it to a real world scenario. Keywords—Enterprise Architecture, model quality, quality framework, EA modeling.",
"title": ""
},
{
"docid": "abb06d560266ca1695f72e4d908cf6ea",
"text": "A simple photovoltaic (PV) system capable of operating in grid-connected mode and using multilevel boost converter (MBC) and line commutated inverter (LCI) has been developed for extracting the maximum power and feeding it to a single phase utility grid with harmonic reduction. Theoretical analysis of the proposed system is done and the duty ratio of the MBC is estimated for extracting maximum power from PV array. For a fixed firing angle of LCI, the proposed system is able to track the maximum power with the determined duty ratio which remains the same for all irradiations. This is the major advantage of the proposed system which eliminates the use of a separate maximum power point tracking (MPPT) Experiments have been conducted for feeding a single phase voltage to the grid. So by proper and simplified technique we are reducing the harmonics in the grid for unbalanced loads.",
"title": ""
},
{
"docid": "20be8363ae04659061a56a1c7d3ee4d5",
"text": "The popularity of level sets for segmentation is mainly based on the sound and convenient treatment of regions and their boundaries. Unfortunately, this convenience is so far not known from level set methods when applied to images with more than two regions. This communication introduces a comparatively simple way how to extend active contours to multiple regions keeping the familiar quality of the two-phase case. We further suggest a strategy to determine the optimum number of regions as well as initializations for the contours",
"title": ""
}
] |
scidocsrr
|
4c141bde61f2ba24bc6ce2fad718bb0a
|
Inter-media hashing for large-scale retrieval from heterogeneous data sources
|
[
{
"docid": "6228f059be27fa5f909f58fb60b2f063",
"text": "We propose a unified manifold learning framework for semi-supervised and unsupervised dimension reduction by employing a simple but effective linear regression function to map the new data points. For semi-supervised dimension reduction, we aim to find the optimal prediction labels F for all the training samples X, the linear regression function h(X) and the regression residue F0 = F - h(X) simultaneously. Our new objective function integrates two terms related to label fitness and manifold smoothness as well as a flexible penalty term defined on the residue F0. Our Semi-Supervised learning framework, referred to as flexible manifold embedding (FME), can effectively utilize label information from labeled data as well as a manifold structure from both labeled and unlabeled data. By modeling the mismatch between h(X) and F, we show that FME relaxes the hard linear constraint F = h(X) in manifold regularization (MR), making it better cope with the data sampled from a nonlinear manifold. In addition, we propose a simplified version (referred to as FME/U) for unsupervised dimension reduction. We also show that our proposed framework provides a unified view to explain and understand many semi-supervised, supervised and unsupervised dimension reduction techniques. Comprehensive experiments on several benchmark databases demonstrate the significant improvement over existing dimension reduction algorithms.",
"title": ""
},
{
"docid": "7655df3f32e6cf7a5545ae2231f71e7c",
"text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.",
"title": ""
}
] |
[
{
"docid": "133eccbb62434ad3444962dcf091226c",
"text": "We propose a novel multi-sensor system for accurate and power-efficient dynamic car-driver hand-gesture recognition, using a short-range radar, a color camera, and a depth camera, which together make the system robust against variable lighting conditions. We present a procedure to jointly calibrate the radar and depth sensors. We employ convolutional deep neural networks to fuse data from multiple sensors and to classify the gestures. Our algorithm accurately recognizes 10 different gestures acquired indoors and outdoors in a car during the day and at night. It consumes significantly less power than purely vision-based systems.",
"title": ""
},
{
"docid": "bb9fd3e54d8d5ce32147b437ed5f52d4",
"text": "OBJECTIVE\nTo assess the association between bullying (both directly and indirectly) and indicators of psychosocial health for boys and girls separately.\n\n\nSTUDY DESIGN\nA school-based questionnaire survey of bullying, depression, suicidal ideation, and delinquent behavior.\n\n\nSETTING\nPrimary schools in Amsterdam, The Netherlands.\n\n\nPARTICIPANTS\nA total of 4811 children aged 9 to 13.\n\n\nRESULTS\nDepression and suicidal ideation are common outcomes of being bullied in both boys and girls. These associations are stronger for indirect than direct bullying. After correction, direct bullying had a significant effect on depression and suicidal ideation in girls, but not in boys. Boy and girl offenders of bullying far more often reported delinquent behavior. Bullying others directly is a much greater risk factor for delinquent behavior than bullying others indirectly. This was true for both boys and girls. Boy and girl offenders of bullying also more often reported depressive symptoms and suicidal ideation. However, after correction for both sexes only a significant association still existed between bullying others directly and suicidal ideation.\n\n\nCONCLUSIONS\nThe association between bullying and psychosocial health differs notably between girls and boys as well as between direct and indirect forms of bullying. Interventions to stop bullying must pay attention to these differences to enhance effectiveness.",
"title": ""
},
{
"docid": "cc70efd881626a16ab23b9305e67adce",
"text": "Many different sciences have developed many different tests to describe and characterise spatial point data. For example, all the trees in a given area may be mapped such that their x, y co-ordinates and other variables, or ‘marks’, (e.g. species, size) might be recorded. Statistical techniques can be used to explore interactions between events at different length scales and interactions between different types of events in the same area. SpPack is a menu-driven add-in for Excel written in Visual Basic for Applications (VBA) that provides a range of statistical analyses for spatial point data. These include simple nearest-neighbour-derived tests and more sophisticated second-order statistics such as Ripley’s K-function and the neighbourhood density function (NDF). Some simple grid or quadrat-based statistics are also calculated. The application of the SpPack add-in is demonstrated for artificially generated event sets with known properties and for a multi-type ecological event set. 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d57f996ed29e1a91f6d0b04d5a83ea38",
"text": "Hashing is a popular approximate nearest neighbor search approach for large-scale image retrieval. Supervised hashing, which incorporates similarity/dissimilarity information on entity pairs to improve the quality of hashing function learning, has recently received increasing attention. However, in the existing supervised hashing methods for images, an input image is usually encoded by a vector of hand-crafted visual features. Such hand-crafted feature vectors do not necessarily preserve the accurate semantic similarities of images pairs, which may often degrade the performance of hashing function learning. In this paper, we propose a supervised hashing method for image retrieval, in which we automatically learn a good image representation tailored to hashing as well as a set of hash functions. The proposed method has two stages. In the first stage, given the pairwise similarity matrix S over training images, we propose a scalable coordinate descent method to decompose S into a product of HH where H is a matrix with each of its rows being the approximate hash code associated to a training image. In the second stage, we propose to simultaneously learn a good feature representation for the input images as well as a set of hash functions, via a deep convolutional network tailored to the learned hash codes in H and optionally the discrete class labels of the images. Extensive empirical evaluations on three benchmark datasets with different kinds of images show that the proposed method has superior performance gains over several state-of-the-art supervised and unsupervised hashing methods.",
"title": ""
},
{
"docid": "8be33fad66b25a9d3a4b05dbfc1aac5d",
"text": "A question-answering system needs to be able to reason about unobserved causes in order to answer questions of the sort that people face in everyday conversations. Recent neural network models that incorporate explicit memory and attention mechanisms have taken steps towards this capability. However, these models have not been tested in scenarios for which reasoning about the unobservable mental states of other agents is necessary to answer a question. We propose a new set of tasks inspired by the well-known false-belief test to examine how a recent question-answering model performs in situations that require reasoning about latent mental states. We find that the model is only successful when the training and test data bear substantial similarity, as it memorizes how to answer specific questions and cannot reason about the causal relationship between actions and latent mental states. We introduce an extension to the model that explicitly simulates the mental representations of different participants in a reasoning task, and show that this capacity increases the model’s performance on our theory of mind test.",
"title": ""
},
{
"docid": "f3a253dcae5127fcd4e62fd2508eef09",
"text": "ACC: allergic contact cheilitis Bronopol: 2-Bromo-2-nitropropane-1,3-diol MI: methylisothiazolinone MCI: methylchloroisothiazolinone INTRODUCTION Pediatric cheilitis can be a debilitating condition for the child and parents. Patch testing can help isolate allergens to avoid. Here we describe a 2-yearold boy with allergic contact cheilitis improving remarkably after prudent avoidance of contactants and food avoidance.",
"title": ""
},
{
"docid": "11bc0abc0aec11c1cf189eb23fd1be9d",
"text": "Web spamming describes behavior that attempts to deceive search engine’s ranking algorithms. TrustRank is a recent algorithm that can combat web spam by propagating trust among web pages. However, TrustRank propagates trust among web pages based on the number of outgoing links, which is also how PageRank propagates authority scores among Web pages. This type of propagation may be suited for propagating authority, but it is not optimal for calculating trust scores for demoting spam sites. In this paper, we propose several alternative methods to propagate trust on the web. With experiments on a real web data set, we show that these methods can greatly decrease the number of web spam sites within the top portion of the trust ranking. In addition, we investigate the possibility of propagating distrust among web pages. Experiments show that combining trust and distrust values can demote more spam sites than the sole use of trust values.",
"title": ""
},
{
"docid": "9095b7af97f9ff8a4258aa89b0ded6b6",
"text": "Data augmentation is the process of generating samples by transforming training data, with the target of improving the accuracy and robustness of classifiers. In this paper, we propose a new automatic and adaptive algorithm for choosing the transformations of the samples used in data augmentation. Specifically, for each sample, our main idea is to seek a small transformation that yields maximal classification loss on the transformed sample. We employ a trust-region optimization strategy, which consists of solving a sequence of linear programs. Our data augmentation scheme is then integrated into a Stochastic Gradient Descent algorithm for training deep neural networks. We perform experiments on two datasets, and show that that the proposed scheme outperforms random data augmentation algorithms in terms of accuracy and robustness, while yielding comparable or superior results with respect to existing selective sampling approaches.",
"title": ""
},
{
"docid": "bbc565d8cc780a1d68bf5384283f59db",
"text": "The physiological requirements of performing exercise above the anaerobic threshold are considerably more demanding than for lower work rates. Lactic acidosis develops at a metabolic rate that is specific to the individual and the task being performed. Although numerous pyruvate-dependent mechanisms can lead to an elevated blood lactate, the increase in lactate during muscular exercise is accompanied by an increase in lactate/pyruvate ratio (i.e., increased NADH/NAD ratio). This is typically caused by an inadequate O2 supply to the mitochondria. Thus, the anaerobic threshold can be considered to be an important assessment of the ability of the cardiovascular system to supply O2 at a rate adequate to prevent muscle anaerobiosis during exercise testing. In this paper, we demonstrate, with statistical justification, that the pattern of arterial lactate and lactate/pyruvate ratio increase during exercise evidences threshold dynamics rather than the continuous exponential increase proposed by some investigators. The pattern of change in arterial bicarbonate (HCO3-) and pulmonary gas exchange supports this threshold concept. To estimate the anaerobic threshold by gas exchange methods, we measure CO2 output (VCO2) as a continuous function of O2 uptake (VO2) (V-slope analysis) as work rate is increased. The break-point in this plot reflects the obligate buffering of increasing lactic acid production by HCO3-. The anaerobic threshold measured by the V-slope analysis appears to be a sensitive index of the development of metabolic acidosis even in subjects in whom other gas exchange indexes are insensitive, owing to irregular breathing, reduced chemoreceptor sensitivity, impaired respiratory mechanics, or all of these occurrences.",
"title": ""
},
{
"docid": "b7dec8c2a0ef689ef0cac1eb6ed76cc5",
"text": "One of the most difficult speech recognition tasks is accurate recognition of human to human communication. Advances in deep learning over the last few years have produced major speech recognition improvements on the representative Switchboard conversational corpus. Word error rates that just a few years ago were 14% have dropped to 8.0%, then 6.6% and most recently 5.8%, and are now believed to be within striking range of human performance. This then raises two issues what IS human performance, and how far down can we still drive speech recognition error rates? A recent paper by Microsoft suggests that we have already achieved human performance. In trying to verify this statement, we performed an independent set of human performance measurements on two conversational tasks and found that human performance may be considerably better than what was earlier reported, giving the community a significantly harder goal to achieve. We also report on our own efforts in this area, presenting a set of acoustic and language modeling techniques that lowered the word error rate of our own English conversational telephone LVCSR system to the level of 5.5%/10.3% on the Switchboard/CallHome subsets of the Hub5 2000 evaluation, which at least at the writing of this paper is a new performance milestone (albeit not at what we measure to be human performance!). On the acoustic side, we use a score fusion of three models: one LSTM with multiple feature inputs, a second LSTM trained with speaker-adversarial multitask learning and a third residual net (ResNet) with 25 convolutional layers and time-dilated convolutions. On the language modeling side, we use word and character LSTMs and convolutional WaveNet-style language models.",
"title": ""
},
{
"docid": "8b94a3040ee23fa3d4403b14b0f550e2",
"text": "Reactive programming has recently gained popularity as a paradigm that is well-suited for developing event-driven and interactive applications. It facilitates the development of such applications by providing abstractions to express time-varying values and automatically managing dependencies between such values. A number of approaches have been recently proposed embedded in various languages such as Haskell, Scheme, JavaScript, Java, .NET, etc. This survey describes and provides a taxonomy of existing reactive programming approaches along six axes: representation of time-varying values, evaluation model, lifting operations, multidirectionality, glitch avoidance, and support for distribution. From this taxonomy, we observe that there are still open challenges in the field of reactive programming. For instance, multidirectionality is supported only by a small number of languages, which do not automatically track dependencies between time-varying values. Similarly, glitch avoidance, which is subtle in reactive programs, cannot be ensured in distributed reactive programs using the current techniques.",
"title": ""
},
{
"docid": "fd7c877f16b4682b3b9d9de4b3e6b368",
"text": "We report on a wearable digital diary study of 26 participants that explores people’s daily authentication behavior across a wide range of targets (phones, PCs, websites, doors, cars, etc.) using a wide range of authenticators (passwords, PINs, physical keys, ID badges, fingerprints, etc.). Our goal is to gain an understanding of how much of a burden different kinds of authentication place on people, so that we can evaluate what kinds of improvements would most benefit them. We found that on average 25% of our participants’ authentications employed physical tokens such as car keys, which suggests that token-based authentication, in addition to password authentication, is a worthy area for improvement. We also found that our participants’ authentication behavior and opinions about authentication varied greatly, so any particular solution might not please everyone. We observed a surprisingly high (3–12%) false reject rate across many types of authentication. We present the design and implementation of the study itself, since wearable digital diary studies may prove useful for others exploring similar topics of human behavior. Finally, we provide an example use of participants’ logs of authentication events as simulation workloads for investigating the possible energy consumption of a “universal authentication” device.",
"title": ""
},
{
"docid": "ac3d9b8a93cb18449b76b2f2ef818d76",
"text": "Slotless brushless dc motors find more and more applications due to their high performance and their low production cost. This paper focuses on the windings inserted in the air gap of these motors and, in particular, to an original production technique that consists in printing them on a flexible printed circuit board. It theoretically shows that this technique, when coupled with an optimization of the winding shape, can improve the power density of about 23% compared with basic skewed and rhombic windings made of round wire. It also presents a first prototype of a winding realized using this technique and an experimental characterization aimed at identifying the importance and the origin of the differences between theory and practice.",
"title": ""
},
{
"docid": "cb6c4f97fcefa003e890c8c4a97ff34b",
"text": "When interacting and communicating with virtual agents in immersive environments, the agents’ behavior should be believable and authentic. Thereby, one important aspect is a convincing auralization of their speech. In this work-in-progress paper a study design to evaluate the effect of adding directivity to speech sound source on the perceived social presence of a virtual agent is presented. Therefore, we describe the study design and discuss first results of a prestudy as well as consequential improvements of the design.",
"title": ""
},
{
"docid": "08e5d41228c9c6700873e93b5cb7fa28",
"text": "We propose a novel approach for automatic segmentation of anatomical structures on 3D CT images by voting from a fully convolutional network (FCN), which accomplishes an end-to-end, voxel-wise multiple-class classification to map each voxel in a CT image directly to an anatomical label. The proposed method simplifies the segmentation of the anatomical structures (including multiple organs) in a CT image (generally in 3D) to majority voting for the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. An FCN consisting of “convolution” and “de-convolution” parts is trained and re-used for the 2D semantic image segmentation of different slices of CT scans. All of the procedures are integrated into a simple and compact all-in-one network, which can segment complicated structures on differently sized CT images that cover arbitrary CT scan regions without any adjustment. We applied the proposed method to segment a wide range of anatomical structures that consisted of 19 types of targets in the human torso, including all the major organs. A database consisting of 240 3D CT scans and a humanly annotated ground truth was used for training and testing. The results showed that the target regions for the entire set of CT test scans were segmented with acceptable accuracies (89 % of total voxels were labeled correctly) against the human annotations. The experimental results showed better efficiency, generality, and flexibility of this end-to-end learning approach on CT image segmentations comparing to conventional methods guided by human expertise.",
"title": ""
},
{
"docid": "9270af032d1adbf9829e7d723ff76849",
"text": "To detect illegal copies of copyrighted images, recent copy detection methods mostly rely on the bag-of-visual-words (BOW) model, in which local features are quantized into visual words for image matching. However, both the limited discriminability of local features and the BOW quantization errors will lead to many false local matches, which make it hard to distinguish similar images from copies. Geometric consistency verification is a popular technology for reducing the false matches, but it neglects global context information of local features and thus cannot solve this problem well. To address this problem, this paper proposes a global context verification scheme to filter false matches for copy detection. More specifically, after obtaining initial scale invariant feature transform (SIFT) matches between images based on the BOW quantization, the overlapping region-based global context descriptor (OR-GCD) is proposed for the verification of these matches to filter false matches. The OR-GCD not only encodes relatively rich global context information of SIFT features but also has good robustness and efficiency. Thus, it allows an effective and efficient verification. Furthermore, a fast image similarity measurement based on random verification is proposed to efficiently implement copy detection. In addition, we also extend the proposed method for partial-duplicate image detection. Extensive experiments demonstrate that our method achieves higher accuracy than the state-of-the-art methods, and has comparable efficiency to the baseline method based on the BOW quantization.",
"title": ""
},
{
"docid": "e9e5ab2607d39a9977136e3b82fd706d",
"text": "BACKGROUND\nChildren and adolescents with autistic spectrum disorder (ASD) presenting with significant limitations in conventional forms of verbal and non-verbal communication are found to respond positively to music therapy intervention involving both active, improvizational methods and receptive music therapy approaches. Improvizational musical activity with therapeutic objectives and outcomes has been found to facilitate motivation, communication skills and social interaction, as well as sustaining and developing attention. The structure and predictability found in music assist in reciprocal interaction, from which tolerance, flexibility and social engagement to build relationships emerge, relying on a systematic approach to promote appropriate and meaningful interpersonal responses.\n\n\nRESULTS\nPublished reports of the value and effectiveness of music therapy as an intervention for children with ASD range from controlled studies to clinical case reports. Further documentation has emphasized the role music therapy plays in diagnostic and clinical assessment. Music therapy assessment can identify limitations and weaknesses in children, as well as strengths and potentials. Research evidence from a systematic review found two randomized controlled trials that examined short-term effects of structured music therapy intervention. Significant effects were found in these studies even with extremely small samples, and the findings are important because they demonstrate the potential of the medium of music for autistic children. Case series studies were identified that examined the effects of improvizational music therapy where communicative behaviour, language development, emotional responsiveness, attention span and behavioural control improved over the course of an intervention of improvizational music therapy.",
"title": ""
},
{
"docid": "01f29e15732a48949b41a193073bcbe3",
"text": "Annotation is the process of adding semantic metadata to resources so that data becomes more meaningful. Creating additional metadata by document annotation is considered one of the main techniques that make machines understand and deal with data on the web. Our paper presents a semantic framework that annotates RSS News Feeds. Semantic Annotation for News Feeds Frameworks (SANF) aims to enrich web content by using semantic metadata easily which facilitates searching and adding of web content.",
"title": ""
},
{
"docid": "7b6c039783091260cee03704ce9748d8",
"text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F , the sensitivity of query ∆, the threshold τ , and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compred with a noisy threshold at line 4 and outputs the result of comparison. Let ⊤ mean that fk(S) > τ . Algorithm 2 is terminated if outputs ⊤ s times.",
"title": ""
},
{
"docid": "5dc4dfc2d443c31332c70a56c2d70c7d",
"text": "Sentiment analysis or opinion mining is an important type of text analysis that aims to support decision making by extracting and analyzing opinion oriented text, identifying positive and negative opinions, and measuring how positively or negatively an entity (i.e., people, organization, event, location, product, topic, etc.) is regarded. As more and more users express their political and religious views on Twitter, tweets become valuable sources of people's opinions. Tweets data can be efficiently used to infer people's opinions for marketing or social studies. This paper proposes a Tweets Sentiment Analysis Model (TSAM) that can spot the societal interest and general people's opinions in regard to a social event. In this paper, Australian federal election 2010 event was taken as an example for sentiment analysis experiments. We are primarily interested in the sentiment of the specific political candidates, i.e., two primary minister candidates - Julia Gillard and Tony Abbot. Our experimental results demonstrate the effectiveness of the system.",
"title": ""
}
] |
scidocsrr
|
26fac8339347eb67742152acbab3915d
|
Automatic Medical Image Classification and Abnormality Detection Using K-Nearest Neighbour
|
[
{
"docid": "4b40fcd6df5403738cabb5f243588d31",
"text": "We purpose a hybrid approach for classification of brain tissues in magnetic resonance images (MRI) based on genetic algorithm (GA) and support vector machine (SVM). A wavelet based texture feature set is derived. The optimal texture features are extracted from normal and tumor regions by using spatial gray level dependence method (SGLDM). These features are given as input to the SVM classifier. The choice of features, which constitute a big problem in classification techniques, is solved by using GA. These optimal features are used to classify the brain tissues into normal, benign or malignant tumor. The performance of the algorithm is evaluated on a series of brain tumor images.",
"title": ""
}
] |
[
{
"docid": "804cee969d47d912d8bdc40f3a3eeb32",
"text": "The problem of matching a forensic sketch to a gallery of mug shot images is addressed in this paper. Previous research in sketch matching only offered solutions to matching highly accurate sketches that were drawn while looking at the subject (viewed sketches). Forensic sketches differ from viewed sketches in that they are drawn by a police sketch artist using the description of the subject provided by an eyewitness. To identify forensic sketches, we present a framework called local feature-based discriminant analysis (LFDA). In LFDA, we individually represent both sketches and photos using SIFT feature descriptors and multiscale local binary patterns (MLBP). Multiple discriminant projections are then used on partitioned vectors of the feature-based representation for minimum distance matching. We apply this method to match a data set of 159 forensic sketches against a mug shot gallery containing 10,159 images. Compared to a leading commercial face recognition system, LFDA offers substantial improvements in matching forensic sketches to the corresponding face images. We were able to further improve the matching performance using race and gender information to reduce the target gallery size. Additional experiments demonstrate that the proposed framework leads to state-of-the-art accuracys when matching viewed sketches.",
"title": ""
},
{
"docid": "e870d5f8daac0d13bdcffcaec4ba04c1",
"text": "In this paper the design, fabrication and test of X-band and 2-18 GHz wideband high power SPDT MMIC switches in microstrip GaN technology are presented. Such switches have demonstrated state-of-the-art performances. In particular the X-band switch exhibits 1 dB insertion loss, better than 37 dB isolation and a power handling capability at 9 GHz of better than 39 dBm at 1 dB insertion loss compression point; the wideband switch has an insertion loss lower than 2.2 dB, better than 25 dB isolation and a power handling capability of better than 38 dBm in the entire bandwidth.",
"title": ""
},
{
"docid": "9973dab94e708f3b87d52c24b8e18672",
"text": "We show that two popular discounted reward natural actor-critics, NAC-LSTD and eNAC, follow biased estimates of the natural policy gradient. We derive the first unbiased discounted reward natural actor-critics using batch and iterative approaches to gradient estimation and prove their convergence to globally optimal policies for discrete problems and locally optimal policies for continuous problems. Finally, we argue that the bias makes the existing algorithms more appropriate for the average reward setting.",
"title": ""
},
{
"docid": "b0910231c6ea05c320c584e34fa41512",
"text": "Energy optimization is important aspect of smart gird (SG). SG integrates communication and information technology in traditional grid. In SG there is two-way communication between consumer and utility. It includes smart meter, Energy Management Controller (EMC) and smart appliances. Users can shift load from on peak hours to off peak hours by adapting Demand Side Management (DSM) strategies, which effectively reduce electricity cost. The objectives of this paper are the minimization of power consumption, electricity cost, reduction of Peak to Average Ratio (PAR) using Enhanced Differential Evolution (EDE) and Chicken Swarm Optimization (CSO) algorithms. For the calculation of cost Critical Peak Pricing (CPP) is used. The simulations result show that proposed schemes reduce electricity cost, reduce power consumption and PAR.",
"title": ""
},
{
"docid": "723f7d157cacfcad4523f7544a9d1c77",
"text": "The superiority of deeply learned pedestrian representations has been reported in very recent literature of person re-identification (re-ID). In this article, we consider the more pragmatic issue of learning a deep feature with no or only a few labels. We propose a progressive unsupervised learning (PUL) method to transfer pretrained deep representations to unseen domains. Our method is easy to implement and can be viewed as an effective baseline for unsupervised re-ID feature learning. Specifically, PUL iterates between (1) pedestrian clustering and (2) fine-tuning of the convolutional neural network (CNN) to improve the initialization model trained on the irrelevant labeled dataset. Since the clustering results can be very noisy, we add a selection operation between the clustering and fine-tuning. At the beginning, when the model is weak, CNN is fine-tuned on a small amount of reliable examples that locate near to cluster centroids in the feature space. As the model becomes stronger, in subsequent iterations, more images are being adaptively selected as CNN training samples. Progressively, pedestrian clustering and the CNN model are improved simultaneously until algorithm convergence. This process is naturally formulated as self-paced learning. We then point out promising directions that may lead to further improvement. Extensive experiments on three large-scale re-ID datasets demonstrate that PUL outputs discriminative features that improve the re-ID accuracy. Our code has been released at https://github.com/hehefan/Unsupervised-Person-Re-identification-Clustering-and-Fine-tuning.",
"title": ""
},
{
"docid": "72977e6d2c3601519b4927c76e376fd1",
"text": "PURPOSE OF REVIEW\nNutritional insufficiencies of nutrients such as omega-3 highly unsaturated fatty acids (HUFAs), vitamins and minerals have been linked to suboptimal developmental outcomes including attention deficit hyperactivity disorder (ADHD). Although the predominant treatment is currently psychostimulant medications, randomized clinical trials with omega-3 HUFAs have reported small-to-modest effects in reducing symptoms of ADHD in children despite arguable individual methodological and design misgivings.\n\n\nRECENT FINDINGS\nThis review presents, discusses and critically evaluates data and findings from meta-analytic and systematic reviews and clinical trials published within the last 12 months. Recent trajectories of this research are discussed, such as comparing eicosapentaenoic acid and docosahexaenoic acid and testing the efficacy of omega-3 HUFAs as an adjunct to methylphenidate. Discussion includes highlighting limitations and potential future directions such as addressing variable findings by accounting for other nutritional deficiencies and behavioural food intolerances.\n\n\nSUMMARY\nThe authors conclude that given the current economic burden of ADHD, estimated in the region of $77 billion in the USA alone, in addition to the fact that a proportion of patients with ADHD are either treatment resistant, nonresponders or withdraw from medication because of adverse side-effects, the investigation of nonpharmacological interventions including omega-3 HUFAs in clinical practice warrants extrapolating.",
"title": ""
},
{
"docid": "7d507a0b754a8029d28216e795cb7286",
"text": "a Lake Michigan Field Station/Great Lakes Environmental Research Laboratory/NOAA, 1431 Beach St, Muskegon, MI 49441, USA b Great Lakes Environmental Research Laboratory/NOAA, 4840 S. State Rd., Ann Arbor, MI 48108, USA c School Forest Resources, Pennsylvania State University, 434 Forest Resources Building, University Park, PA 16802, USA d School of Natural Resources and Environment, University of Michigan, 440 Church St., Ann Arbor, MI 48109, USA",
"title": ""
},
{
"docid": "dde768e5944f1ce8c0a68b4cc42eaf81",
"text": "The problem of aspect-based sentiment analysis deals with classifying sentiments (negative, neutral, positive) for a given aspect in a sentence. A traditional sentiment classification task involves treating the entire sentence as a text document and classifying sentiments based on all the words. Let us assume, we have a sentence such as ”the acceleration of this car is fast, but the reliability is horrible”. This can be a difficult sentence because it has two aspects with conflicting sentiments about the same entity. Considering machine learning techniques (or deep learning), how do we encode the information that we are interested in one aspect and its sentiment but not the other? Let us explore various pre-processing steps, features, and methods used to facilitate in solving this task.",
"title": ""
},
{
"docid": "2271dd42ca1f9682dc10c9832387b55f",
"text": "People who score low on a performance test overestimate their own performance relative to others, whereas high scorers slightly underestimate their own performance. J. Kruger and D. Dunning (1999) attributed these asymmetric errors to differences in metacognitive skill. A replication study showed no evidence for mediation effects for any of several candidate variables. Asymmetric errors were expected because of statistical regression and the general better-than-average (BTA) heuristic. Consistent with this parsimonious model, errors were no longer asymmetric when either regression or the BTA effect was statistically removed. In fact, high rather than low performers were more error prone in that they were more likely to neglect their own estimates of the performance of others when predicting how they themselves performed relative to the group.",
"title": ""
},
{
"docid": "c15aa2444187dffe2be4636ad00babdd",
"text": "Most people have become “big data” producers in their daily life. Our desires, opinions, sentiments, social links as well as our mobile phone calls and GPS track leave traces of our behaviours. To transform these data into knowledge, value is a complex task of data science. This paper shows how the SoBigData Research Infrastructure supports data science towards the new frontiers of big data exploitation. Our research infrastructure serves a large community of social sensing and social mining researchers and it reduces the gap between existing research centres present at European level. SoBigData integrates resources and creates an infrastructure where sharing data and methods among text miners, visual analytics researchers, socio-economic scientists, network scientists, political scientists, humanities researchers can indeed occur. The main concepts related to SoBigData Research Infrastructure are presented. These concepts support virtual and transnational (on-site) access to the resources. Creating and supporting research communities are considered to be of vital importance for the success of our research infrastructure, as well as contributing to train the new generation of data scientists. Furthermore, this paper introduces the concept of exploratory and shows their role in the promotion of the use of our research infrastructure. The exploratories presented in this paper represent also a set of real applications in the context of social mining. Finally, a special attention is given to the legal and ethical aspects. Everything in SoBigData is supervised by an ethical and legal framework.",
"title": ""
},
{
"docid": "463b44262823b80b17466470714ded59",
"text": "Character recognition for cursive script like Arabic, handwritten English and French is a challenging task which becomes more complicated for Urdu Nasta’liq text due to complexity of this script over Arabic. Recurrent neural network (RNN) has proved excellent performance for English, French as well as cursive Arabic script due to sequence learning property. Most of the recent approaches perform segmentation-based character recognition, whereas, due to the complexity of the Nasta’liq script, segmentation error is quite high as compared to Arabic Naskh script. RNN has provided promising results in such scenarios. In this paper, we achieved high accuracy for Urdu Nasta’liq using statistical features and multi-dimensional long short-term memory. We present a robust feature extraction approach that extracts feature based on right-to-left sliding window. Results showed that selected features significantly reduce the label error. For evaluation purposes, we have used Urdu printed text images dataset and compared the proposed approach with the recent work. The system provided 94.97 % recognition accuracy for unconstrained printed Nasta’liq text lines and outperforms the state-of-the-art results.",
"title": ""
},
{
"docid": "45ce4800412789dba8f8cd7ac3fe983b",
"text": "Metamorphic malware is a kind of malware which evades signature-based anti-viruses by changing its internal structure in each infection. This paper, firstly, introduces a new measure of distance between two computer programs called program dissimilarity measure based on entropy (PDME). Then, it suggests a measure for the degree of metamorphism, based on the suggested distance measure. The distance measure is defined based on the Entropy of the two malware programs. Moreover, the paper shows that the distance measure can be used for classifying metamorphic malware via K-Nearest Neighbors (KNN) method. The method is evaluated by four metamorphic malware families. The results demonstrate that the measure can indicate the degree of metamorphism efficiently, and the KNN classification method using PDME can classify the metamorphic malware with a high precision.",
"title": ""
},
{
"docid": "f22c14a8fa1f5cb28604bbb7012a41e4",
"text": "The authors support the hypothesis that a causative agent in Parkinson's disease (PD) might be either fungus or bacteria with fungus-like properties - Actinobacteria, and that their spores may serve as 'infectious agents'. Updated research and the epidemiology of PD suggest that the disease might be induced by environmental factor(s), possibly with genetic susceptibility, and that α-synuclein probably should be regarded as part of the body's own defense mechanism. To explain the dual-hit theory with stage 1 involvement of the olfactory structures and the 'gut-brain'-axis, the environmental factor is probably airborne and quite 'robust' entering the body via the nose/mouth, then to be swallowed reaching the enteric nervous system with retained pathogenicity. Similar to the essence of smoking food, which is to eradicate microorganisms, a viable agent may be defused by tobacco smoke. Hence, the agent is likely to be a 'living' and not an inert agent. Furthermore, and accordant with the age-dependent incidence of LPD, this implies that a dormant viable agent have been escorted by α-synuclein via retrograde axonal transport from the nose and/or GI tract to hibernate in the associated cerebral nuclei. In the brain, PD spreads like a low-grade infection, and that patients develop symptoms in later life, indicate a relatively long incubation time. Importantly, Actinomyces species may form endospores, the hardiest known form of life on Earth. The authors hypothesize that certain spores may not be subject to degradation by macroautophagy, and that these spores become reactivated due to the age-dependent or genetic reduced macroautophagic function. Hence, the hibernating spore hypothesis explains both early-onset and late-onset PD. Evaluation of updated available information are all consistent with the hypothesis that PD may be induced by spores from fungi or Actinobacteria and thus supports Broxmeyer's hypothesis put forward 15years ago.",
"title": ""
},
{
"docid": "d5bd7400d4b7e34cbf7af863df5f9935",
"text": "Fine-grained categorisation has been a challenging problem due to small inter-class variation, large intra-class variation and low number of training images. We propose a learning system which first clusters visually similar classes and then learns deep convolutional neural network features specific to each subset. Experiments on the popular fine-grained Caltech-UCSD bird dataset show that the proposed method outperforms recent fine-grained categorisation methods under the most difficult setting: no bounding boxes are presented at test time. It achieves a mean accuracy of 77.5%, compared to the previous best performance of 73.2%. We also show that progressive transfer learning allows us to first learn domain-generic features (for bird classification) which can then be adapted to specific set of bird classes, yielding improvements in accuracy.",
"title": ""
},
{
"docid": "cea0f4b7409729fd310024d2e9a31b71",
"text": "Relative ranging between Wireless Sensor Network (WSN) nod es is considered to be an important requirement for a number of dis tributed applications. This paper focuses on a two-way, time of flight (ToF) te chnique which achieves good accuracy in estimating the point-to-point di s ance between two wireless nodes. The underlying idea is to utilize a two-way t ime transfer approach in order to avoid the need for clock synchronization b etween the participating wireless nodes. Moreover, by employing multipl e ToF measurements, sub-clock resolution is achieved. A calibration stage is us ed to estimate the various delays that occur during a message exchange and require subtraction from the initial timed value. The calculation of the range betwee n the nodes takes place on-node making the proposed scheme suitable for distribute d systems. Care has been taken to exclude the erroneous readings from the set of m easurements that are used in the estimation of the desired range. The two-way T oF technique has been implemented on commercial off-the-self (COTS) device s without the need for additional hardware. The system has been deployed in var ous experimental locations both indoors and outdoors and the obtained result s reveal that accuracy between 1m RMS and 2.5m RMS in line-of-sight conditions over a 42m range can be achieved.",
"title": ""
},
{
"docid": "f2f95f70783be5d5ee1260a3c5b9d892",
"text": "Information Extraction is the process of automatically obtaining knowledge from plain text. Because of the ambiguity of written natural language, Information Extraction is a difficult task. Ontology-based Information Extraction (OBIE) reduces this complexity by including contextual information in the form of a domain ontology. The ontology provides guidance to the extraction process by providing concepts and relationships about the domain. However, OBIE systems have not been widely adopted because of the difficulties in deployment and maintenance. The Ontology-based Components for Information Extraction (OBCIE) architecture has been proposed as a form to encourage the adoption of OBIE by promoting reusability through modularity. In this paper, we propose two orthogonal extensions to OBCIE that allow the construction of hybrid OBIE systems with higher extraction accuracy and a new functionality. The first extension utilizes OBCIE modularity to integrate different types of implementation into one extraction system, producing a more accurate extraction. For each concept or relationship in the ontology, we can select the best implementation for extraction, or we can combine both implementations under an ensemble learning schema. The second extension is a novel ontology-based error detection mechanism. Following a heuristic approach, we can identify sentences that are logically inconsistent with the domain ontology. Because the implementation strategy for the extraction of a concept is independent of the functionality of the extraction, we can design a hybrid OBIE system with concepts utilizing different implementation strategies for extracting correct or incorrect sentences. Our evaluation shows that, in the implementation extension, our proposed method is more accurate in terms of correctness and completeness of the extraction. Moreover, our error detection method can identify incorrect statements with a high accuracy.",
"title": ""
},
{
"docid": "6952a28e63c231c1bfb43391a21e80fd",
"text": "Deep learning has attracted tremendous attention from researchers in various fields of information engineering such as AI, computer vision, and language processing [Kalchbrenner and Blunsom, 2013; Krizhevsky et al., 2012; Mnih et al., 2013], but also from more traditional sciences such as physics, biology, and manufacturing [Anjos et al., 2015; Baldi et al., 2014; Bergmann et al., 2014]. Neural networks, image processing tools such as convolutional neural networks, sequence processing models such as recurrent neural networks, and regularisation tools such as dropout, are used extensively. However, fields such as physics, biology, and manufacturing are ones in which representing model uncertainty is of crucial importance [Ghahramani, 2015; Krzywinski and Altman, 2013]. With the recent shift in many of these fields towards the use of Bayesian uncertainty [Herzog and Ostwald, 2013; Nuzzo, 2014; Trafimow and Marks, 2015], new needs arise from deep learning. In this work we develop tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation. In the first part of this thesis we develop the theory for such tools, providing applications and illustrative examples. We tie approximate inference in Bayesian models to dropout and other stochastic regularisation techniques, and assess the approximations empirically. We give example applications arising from this connection between modern deep learning and Bayesian modelling such as active learning of image data and data efficient deep reinforcement learning. We further demonstrate the method’s practicality through a survey of recent applications making use of the suggested tools in language applications, medical diagnostics, bioinformatics, image processing, and autonomous driving. In the second part of the thesis we explore its theoretical implications, and the insights stemming from the link between Bayesian modelling and deep learning. We discuss what determines model uncertainty properties, analyse the approximate inference analytically in the linear case, and theoretically examine various priors such as spike and slab priors.",
"title": ""
},
{
"docid": "60c03017f7254c28ba61348d301ae612",
"text": "Code flaws or vulnerabilities are prevalent in software systems and can potentially cause a variety of problems including deadlock, information loss, or system failure. A variety of approaches have been developed to try and detect the most likely locations of such code vulnerabilities in large code bases. Most of them rely on manually designing features (e.g. complexity metrics or frequencies of code tokens) that represent the characteristics of the code. However, all suffer from challenges in sufficiently capturing both semantic and syntactic representation of source code, an important capability for building accurate prediction models. In this paper, we describe a new approach, built upon the powerful deep learning Long Short Term Memory model, to automatically learn both semantic and syntactic features in code. Our evaluation on 18 Android applications demonstrates that the prediction power obtained from our learned features is equal or even superior to what is achieved by state of the art vulnerability prediction models: 3%–58% improvement for within-project prediction and 85% for cross-project prediction.",
"title": ""
},
{
"docid": "8100a2c4f775d5e64b655de7835f946b",
"text": "primary challenge in responding to both natural and man-made disasters is communication. This has been highlighted by recent disasters such as the 9/11 terrorist attacks and Hurricane Katrina [2, 5, 6]. A problem frequently cited by responders is the lack of radio interoperability. Responding organizations must work in concert to form a cohesive plan of response. However, each group—fire, police, SWAT, HazMat—com-municates with radios set to orthogonal frequencies , making inter-agency communications extremely difficult. The problem is compounded as more local, state, and federal agencies become involved. The communication challenges in emergency response go far beyond simple interop-erability issues. Based on our research, practical observation of first responder exercises and drills, and workshop discussions, we have identified three categories of communication challenges: technological, sociological, and organizational. These three major areas are key to developing and maintaining healthy and effective disaster communication systems. The primary technological challenge after a disaster is rapid deployment of communication systems for first responders and disaster management workers. This is true regardless of whether the communications network has been completely destroyed (power, telephone, and/or network connectivity infrastructure), or, as in the case of some remote geographic areas, the infrastructure was previously nonex-istent. Deployment of a new system is more complicated in areas where partial communication infrastructures remain, than where no prior communication networks existed. This can be due to several factors including interference from existing partial communication networks and the dependency of people on their prior systems. Another important obstacle to overcome is the multi-organizational radio interoperability issue. To make future communication systems capable of withstanding large-or medium-scale disasters, two technological solutions can be incorporated into the design: dual-use technology and built-in architectural and protocol redundancy. Dual-use technology would enable both normal and emergency operational modes. During crises, such devices would work in a network-controlled fashion, achieved using software agents within the communication",
"title": ""
},
{
"docid": "b07ae3888b52faa598893bbfbf04eae2",
"text": "This paper presents a compliant locomotion framework for torque-controlled humanoids using model-based whole-body control. In order to stabilize the centroidal dynamics during locomotion, we compute linear momentum rate of change objectives using a novel time-varying controller for the Divergent Component of Motion (DCM). Task-space objectives, including the desired momentum rate of change, are tracked using an efficient quadratic program formulation that computes optimal joint torque setpoints given frictional contact constraints and joint position / torque limits. In order to validate the effectiveness of the proposed approach, we demonstrate push recovery and compliant walking using THOR, a 34 DOF humanoid with series elastic actuation. We discuss details leading to the successful implementation of optimization-based whole-body control on our hardware platform, including the design of a “simple” joint impedance controller that introduces inner-loop velocity feedback into the actuator force controller.",
"title": ""
}
] |
scidocsrr
|
c991a433de227c0ae2819cf76fc9570d
|
Deep Neural Networks Segment Neuronal Membranes in Electron Microscopy Images
|
[
{
"docid": "7135891f34d0976c514fbfc0686105f9",
"text": "We have measured the spatial density of cones and rods in eight whole-mounted human retinas, obtained from seven individuals between 27 and 44 years of age, and constructed maps of photoreceptor density and between-individual variability. The average human retina contains 4.6 million cones (4.08-5.29 million). Peak foveal cone density averages 199,000 cones/mm2 and is highly variable between individuals (100,000-324,000 cones/mm2). The point of highest density may be found in an area as large as 0.032 deg2. Cone density falls steeply with increasing eccentricity and is an order of magnitude lower 1 mm away from the foveal center. Superimposed on this gradient is a streak of high cone density along the horizontal meridian. At equivalent eccentricities, cone density is 40-45% higher in nasal compared to temporal retina and slightly higher in midperipheral inferior compared to superior retina. Cone density also increases slightly in far nasal retina. The average human retina contains 92 million rods (77.9-107.3 million). In the fovea, the average horizontal diameter of the rod-free zone is 0.350 mm (1.25 degrees). Foveal rod density increases most rapidly superiorly and least rapidly nasally. The highest rod densities are located along an elliptical ring at the eccentricity of the optic disk and extending into nasal retina with the point of highest density typically in superior retina (5/6 eyes). Rod densities decrease by 15-25% where the ring crosses the horizontal meridian. Rod density declines slowly from the rod ring to the far periphery and is highest in nasal and superior retina. Individual variability in photoreceptor density differs with retinal region and is similar for both cones and rods. Variability is highest near the fovea, reaches a minimum in the midperiphery, and then increases with eccentricity to the ora serrata. The total number of foveal cones is similar for eyes with widely varying peak cone density, consistent with the idea that the variability reflects differences in the lateral migration of photoreceptors during development. Two fellow eyes had cone and rod numbers within 8% and similar but not identical photoreceptor topography.",
"title": ""
}
] |
[
{
"docid": "059e8e43e6e57565e2aa319c1d248a3b",
"text": "BACKGROUND\nWhile depression is known to involve a disturbance of mood, movement and cognition, its associated cognitive deficits are frequently viewed as simple epiphenomena of the disorder.\n\n\nAIMS\nTo review the status of cognitive deficits in depression and their putative neurobiological underpinnings.\n\n\nMETHOD\nSelective computerised review of the literature examining cognitive deficits in depression and their brain correlates.\n\n\nRESULTS\nRecent studies report both mnemonic deficits and the presence of executive impairment--possibly selective for set-shifting tasks--in depression. Many studies suggest that these occur independent of age, depression severity and subtype, task 'difficulty', motivation and response bias: some persist upon clinical 'recovery'.\n\n\nCONCLUSIONS\nMnemonic and executive deficits do no appear to be epiphenomena of depressive disorder. A focus on the interactions between motivation, affect and cognitive function may allow greater understanding of the interplay between key aspects of the dorsal and ventral aspects of the prefrontal cortex in depression.",
"title": ""
},
{
"docid": "7797523750404879fb6ed025e24144e2",
"text": "This paper presents a retrospective of electric motor developments in General Motors (GM) for electric vehicle (EV), hybrid electric vehicle (HEV), plug-in hybrid electric vehicle (PHEV), and fuel cell electric vehicle (FCEV) production programs. This paper includes i) the progression of electric motor stator and rotor design methodologies that gradually improved motor torque, power, and efficiency performance while mitigating for noise, ii) Heavy rare earth (HRE) mitigation in subsequent design to lower cost and supply uncertainty, iii) Design techniques to lower torque ripple and radial force to mitigate noise and vibration issues. These techniques are elaborated in details with design examples, simulation and test data.",
"title": ""
},
{
"docid": "e1366b0128c4d76addd57bb2b02a19b5",
"text": "OBJECTIVE\nThe present study examined the association between child sexual abuse (CSA) and sexual health outcomes in young adult women. Maladaptive coping strategies and optimism were investigated as possible mediators and moderators of this relationship.\n\n\nMETHOD\nData regarding sexual abuse, coping, optimism and various sexual health outcomes were collected using self-report and computerized questionnaires with a sample of 889 young adult women from the province of Quebec aged 20-23 years old.\n\n\nRESULTS\nA total of 31% of adult women reported a history of CSA. Women reporting a severe CSA were more likely to report more adverse sexual health outcomes including suffering from sexual problems and engaging in more high-risk sexual behaviors. CSA survivors involving touching only were at greater risk of reporting more negative sexual self-concept such as experiencing negative feelings during sex than were non-abused participants. Results indicated that emotion-oriented coping mediated outcomes related to negative sexual self-concept while optimism mediated outcomes related to both, negative sexual self-concept and high-risk sexual behaviors. No support was found for any of the proposed moderation models.\n\n\nCONCLUSIONS\nSurvivors of more severe CSA are more likely to engage in high-risk sexual behaviors that are potentially harmful to their health as well as to experience more sexual problems than women without a history of sexual victimization. Personal factors, namely emotion-oriented coping and optimism, mediated some sexual health outcomes in sexually abused women. The results suggest that maladaptive coping strategies and optimism regarding the future may be important targets for interventions optimizing sexual health and sexual well-being in CSA survivors.",
"title": ""
},
{
"docid": "e8db06439dc533e0dd24e0920feb70c9",
"text": "Today, vehicles are increasingly being connected to the Internet of Things which enable them to provide ubiquitous access to information to drivers and passengers while on the move. However, as the number of connected vehicles keeps increasing, new requirements (such as seamless, secure, robust, scalable information exchange among vehicles, humans, and roadside infrastructures) of vehicular networks are emerging. In this context, the original concept of vehicular ad-hoc networks is being transformed into a new concept called the Internet of Vehicles (IoV). We discuss the benefits of IoV along with recent industry standards developed to promote its implementation. We further present recently proposed communication protocols to enable the seamless integration and operation of the IoV. Finally, we present future research directions of IoV that require further consideration from the vehicular research community.",
"title": ""
},
{
"docid": "0669be9b2ae3e29491d5e62804537d80",
"text": "The core of any system of economic theory is the explanation of how prices are determined. As Mises (1998, p. 235) himself put it, “Economics is mainly concerned with the analysis of the determination of money prices of goods and services exchanged on the market.” Thus, the core of Human Action is parts three and four (pp. 201–684), entitled, respectively, “Economic Calculation” and “Catallactics or Economics of the Market Society.” In these two parts, comprising 484 pages, there is presented for the first time a complete and systematic theory of how actual market prices are determined. Of course, Mises did not create this theory out of whole cloth. In fact, the theory of price elaborated in Human Action represents the crowning achievement of the Austrian School of economics. It is the culmination of the approach to price theory originated by Carl Menger in 1871 and developed further by a handful of brilliant economists of the generation intervening between Menger and Mises. These latter included especially Eugen von Böhm-Bawerk, J.B. Clark, Phillip H. Wicksteed, Frank A. Fetter, and Herbert J. Davenport. Unfortunately, for reasons to be explained below, the entire Mengerian approach went into decline after World War I and had lapsed into nearly complete dormancy by the mid-1930s. Mises’s outstanding contribution in Human Action was to singlehandedly revive this approach and elaborate it into a coherent and systematic theory of price determination. This article is divided into sections, section 1 describes the development of the Mengerian approach to price theory up until World War I, by which time it had reached the zenith of its international influence. Section 2 describes its amazingly rapid decline and suggests four reasons for it, including two fundamental theoretical",
"title": ""
},
{
"docid": "ab400c41db805b1574e8db80f72e47bd",
"text": "Radiation from printed millimeter-wave antennas integrated in mobile terminals is affected by surface currents on chassis, guided waves trapped in dielectric layers, superstrates, and the user’s hand, making mobile antenna design for 5G communication challenging. In this paper, four canonical types of printed 28-GHz antenna elements are integrated in a 5G mobile terminal mock-up. Different kinds of terminal housing effects are examined separately, and the terminal housing effects are also diagnosed through equivalent currents by using the inverse source technique. To account for the terminal housing effects on a beam-scanning antenna subarray, we propose the effective beam-scanning efficiency to evaluate its coverage performance. This paper presents the detailed analysis, results, and new concepts regarding the terminal housing effects, and thereby provides valuable insight into the practical 5G mobile antenna design and radiation performance characterization.",
"title": ""
},
{
"docid": "23eb737d3930862326f81bac73c5e7f5",
"text": "O discussion communities have become a widely used medium for interaction, enabling conversations across a broad range of topics and contexts. Their success, however, depends on participants’ willingness to invest their time and attention in the absence of formal role and control structures. Why, then, would individuals choose to return repeatedly to a particular community and engage in the various behaviors that are necessary to keep conversation within the community going? Some studies of online communities argue that individuals are driven by self-interest, while others emphasize more altruistic motivations. To get beyond these inconsistent explanations, we offer a model that brings dissimilar rationales into a single conceptual framework and shows the validity of each rationale in explaining different online behaviors. Drawing on typologies of organizational commitment, we argue that members may have psychological bonds to a particular online community based on (a) need, (b) affect, and/or (c) obligation. We develop hypotheses that explain how each form of commitment to a community affects the likelihood that a member will engage in particular behaviors (reading threads, posting replies, moderating the discussion). Our results indicate that each form of community commitment has a unique impact on each behavior, with need-based commitment predicting thread reading, affect-based commitment predicting reply posting and moderating behaviors, and obligation-based commitment predicting only moderating behavior. Researchers seeking to understand how discussion-based communities function will benefit from this more precise theorizing of how each form of member commitment relates to different kinds of online behaviors. Community managers who seek to encourage particular behaviors may use our results to target the underlying form of commitment most likely to encourage the activities they wish to promote.",
"title": ""
},
{
"docid": "8d7b0829c1172eff0aa00f34352a4c62",
"text": "As a commonly used technique in data preprocessing, feature selection selects a subset of informative attributes or variables to build models describing data. By removing redundant and irrelevant or noise features, feature selection can improve the predictive accuracy and the comprehensibility of the predictors or classifiers. Many feature selection algorithms with different selection criteria has been introduced by researchers. However, it is discovered that no single criterion is best for all applications. In this paper, we propose a framework based on a genetic algorithm (GA) for feature subset selection that combines various existing feature selection methods. The advantages of this approach include the ability to accommodate multiple feature selection criteria and find small subsets of features that perform well for a particular inductive learning algorithm of interest to build the classifier. We conducted experiments using three data sets and three existing feature selection methods. The experimental results demonstrate that our approach is a robust and effective approach to find subsets of features with higher classification accuracy and/or smaller size compared to each individual feature selection algorithm. F. Tan (B) · X. Fu · Y. Zhang · A. G. Bourgeois Department of Computer Science, Georgia State University, Atlanta, GA 30302, USA e-mail: [email protected] X. Fu e-mail: [email protected] Y. Zhang e-mail: [email protected] A. G. Bourgeois e-mail: [email protected]",
"title": ""
},
{
"docid": "8f51274de04f14293e8f345af13bab45",
"text": "It is important to find the person with right expertise and the appropriate solutions in the specific field to solve a critical situation in a large complex system such as an enterprise level application. In this paper, we apply the experts’ knowledge to construct a solution retrieval system for expert finding and problem diagnosis. Firstly, we aim to utilize the experts’ problem diagnosis knowledge which can identify the error type of problem to suggest the corresponding expert and retrieve the solution for specific error type. Therefore, how to find an efficient way to use domain knowledge and the corresponding experts has become an important issue. To transform experts’ knowledge into the knowledge base of a solution retrieval system, the idea of developing a solution retrieval system based on hybrid approach using RBR (rule-based reasoning) and CBR (case-based reasoning), RCBR (rule-based CBR), is proposed in this research. Furthermore, we incorporate domain expertise into our methodology with role-based access control model to suggest appropriate expert for problem solving, and build a prototype system with expert finding and problem diagnosis for the complex system. The experimental results show that RCBR (rule-based CBR) can improve accuracy of retrieval cases and reduce retrieval time prominently. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5ebefc9d5889cb9c7e3f83a8b38c4cb4",
"text": "As organizations start to use data-intensive cluster computing systems like Hadoop and Dryad for more applications, there is a growing need to share clusters between users. However, there is a conflict between fairness in scheduling and data locality (placing tasks on nodes that contain their input data). We illustrate this problem through our experience designing a fair scheduler for a 600-node Hadoop cluster at Facebook. To address the conflict between locality and fairness, we propose a simple algorithm called delay scheduling: when the job that should be scheduled next according to fairness cannot launch a local task, it waits for a small amount of time, letting other jobs launch tasks instead. We find that delay scheduling achieves nearly optimal data locality in a variety of workloads and can increase throughput by up to 2x while preserving fairness. In addition, the simplicity of delay scheduling makes it applicable under a wide variety of scheduling policies beyond fair sharing.",
"title": ""
},
{
"docid": "483a349f65e1524916ea0190ecf4e18b",
"text": "Physical library collections are valuable and long standing resources for knowledge and learning. However, managing books in a large bookshelf and finding books on it often leads to tedious manual work, especially for large book collections where books might be missing or misplaced. Recently, deep neural models, such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have achieved great success for scene text detection and recognition. Motivated by these recent successes, we aim to investigate their viability in facilitating book management, a task that introduces further challenges including large amounts of cluttered scene text, distortion, and varied lighting conditions. In this paper, we present a library inventory building and retrieval system based on scene text reading methods. We specifically design our scene text recognition model using rich supervision to accelerate training and achieve state-of-the-art performance on several benchmark datasets. Our proposed system has the potential to greatly reduce the amount of human labor required in managing book inventories as well as the space needed to store book information.",
"title": ""
},
{
"docid": "779280c897c09ce0017dfc7848f803b7",
"text": "With increasing storage capacities on current PCs, searching the World Wide Web has ironically become more efficient than searching one’s own personal computer. The recently introduced desktop search engines are a first step towards coping with this problem, but not yet a satisfying solution. The reason for that is that desktop search is actually quite different from its web counterpart. Documents on the desktop are not linked to each other in a way comparable to the web, which means that result ranking is poor or even inexistent, because algorithms like PageRank cannot be used for desktop search. On the other hand, desktop search could potentially profit from a lot of implicit and explicit semantic information available in emails, folder hierarchies, browser cache contexts and others. This paper investigates how to extract and store these activity based context information explicitly as RDF metadata and how to use them, as well as additional background information and ontologies, to enhance desktop search.",
"title": ""
},
{
"docid": "0e56318633147375a1058a6e6803e768",
"text": "150/150). Large-scale distributed analyses of over 30,000 MRI scans recently detected common genetic variants associated with the volumes of subcortical brain structures. Scaling up these efforts, still greater computational challenges arise in screening the genome for statistical associations at each voxel in the brain, localizing effects using “image-wide genome-wide” testing (voxelwise GWAS, vGWAS). Here we benefit from distributed computations at multiple sites to meta-analyze genome-wide image-wide data, allowing private genomic data to stay at the site where it was collected. Site-specific tensorbased morphometry (TBM) is performed with a custom template for each site, using a multi channel registration. A single vGWAS testing 10 variants against 2 million voxels can yield hundreds of TB of summary statistics, which would need to be transferred and pooled for meta-analysis. We propose a 2-step method, which reduces data transfer for each site to a subset of SNPs and voxels guaranteed to contain all significant hits.",
"title": ""
},
{
"docid": "0750f378670eee4456f756955b72cfec",
"text": "With the rapid advancement of technology, healthcare systems have been quickly transformed into a pervasive environment, where both challenges and opportunities abound. On the one hand, the proliferation of smart phones and advances in medical sensors and devices have driven the emergence of wireless body area networks for remote patient monitoring, also known as mobile-health (M-health), thereby providing a reliable and cost effective way to improving efficiency and quality of health care. On the other hand, the advances of M-health systems also generate extensive medical data, which could crowd today’s cellular networks. Device-to-device (D2D) communications have been proposed to address this challenge, but unfortunately, security threats are also emerging because of the open nature of D2D communications between medical sensors and highly privacy-sensitive nature of medical data. Even, more disconcerting is healthcare systems that have many characteristics that make them more vulnerable to privacy attacks than in other applications. In this paper, we propose a light-weight and robust security-aware D2D-assist data transmission protocol for M-health systems by using a certificateless generalized signcryption (CLGSC) technique. Specifically, we first propose a new efficient CLGSC scheme, which can adaptively work as one of the three cryptographic primitives: signcryption, signature, or encryption, but within one single algorithm. The scheme is proved to be secure, simultaneously achieving confidentiality and unforgeability. Based on the proposed CLGSC algorithm, we further design a D2D-assist data transmission protocol for M-health systems with security properties, including data confidentiality and integrity, mutual authentication, contextual privacy, anonymity, unlinkability, and forward security. Performance analysis demonstrates that the proposed protocol can achieve the design objectives and outperform existing schemes in terms of computational and communication overhead.",
"title": ""
},
{
"docid": "0141a93f93a7cf3c8ee8fd705b0a9657",
"text": "We systematically explore regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect a maximum entropy based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT’14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide applicability of these regularizers.",
"title": ""
},
{
"docid": "5bf2c4a187b35ad5c4e69aef5eb9ffea",
"text": "In the last decade, the research of the usability of mobile phones has been a newly evolving area with few established methodologies and realistic practices that ensure capturing usability in evaluation. Thus, there exists growing demand to explore appropriate evaluation methodologies that evaluate the usability of mobile phones quickly as well as comprehensively. This study aims to develop a task-based usability checklist based on heuristic evaluations in views of mobile phone user interface (UI) practitioners. A hierarchical structure of UI design elements and usability principles related to mobile phones were developed and then utilized to develop the checklist. To demonstrate the practical effectiveness of the proposed checklist, comparative experiments were conducted on the usability checklist and usability testing. The majority of usability problems found by usability testing and additional problems were discovered by the proposed checklist. It is expected that the usability checklist proposed in this study could be used quickly and efficiently by usability practitioners to evaluate the mobile phone UI in the middle of the mobile phone development process.",
"title": ""
},
{
"docid": "c2ac1c1f08e7e4ccba14ea203acba661",
"text": "This paper describes an approach to determine a layout for the order picking area in warehouses, such that the average travel distance for the order pickers is minimized. We give analytical formulas by which the average length of an order picking route can be calculated for two different routing policies. The optimal layout can be determined by using such formula as an objective function in a non-linear programming model. The optimal number of aisles in an order picking area appears to depend strongly on the required storage space and the pick list size.",
"title": ""
},
{
"docid": "44cc7de51b68b1dcc769f6f020168ca5",
"text": ". Pairs of similar examples classified dissimilarly can be cancelled out by pairs classified dissimilarly in the opposite direction, least stringent fairness requirement I Hybrid Fairness: cancellation only among cross-pairs within “buckets” – interpolates between individual and group fairness I Fairness loss minimized by constant predictors, but this incurs bad accuracy loss . How to trade off accuracy and fairness losses?",
"title": ""
},
{
"docid": "3e075d0914eb43b94f86ede42f079544",
"text": "We present an algorithm for curve skeleton extraction via Laplacian-based contraction. Our algorithm can be applied to surfaces with boundaries, polygon soups, and point clouds. We develop a contraction operation that is designed to work on generalized discrete geometry data, particularly point clouds, via local Delaunay triangulation and topological thinning. Our approach is robust to noise and can handle moderate amounts of missing data, allowing skeleton-based manipulation of point clouds without explicit surface reconstruction. By avoiding explicit reconstruction, we are able to perform skeleton-driven topology repair of acquired point clouds in the presence of large amounts of missing data. In such cases, automatic surface reconstruction schemes tend to produce incorrect surface topology. We show that the curve skeletons we extract provide an intuitive and easy-to-manipulate structure for effective topology modification, leading to more faithful surface reconstruction.",
"title": ""
},
{
"docid": "61d80b5b0c6c2b3feb1ce667babd2236",
"text": "In a recent article published in this journal, Lombard, Snyder-Duch, and Bracken (2002) surveyed 200 content analyses for their reporting of reliability tests; compared the virtues and drawbacks of five popular reliability measures; and proposed guidelines and standards for their use. Their discussion revealed that numerous misconceptions circulate in the content analysis literature regarding how these measures behave and can aid or deceive content analysts in their effort to ensure the reliability of their data. This paper proposes three conditions for statistical measures to serve as indices of the reliability of data and examines the mathematical structure and the behavior of the five coefficients discussed by the authors, plus two others. It compares common beliefs about these coefficients with what they actually do and concludes with alternative recommendations for testing reliability in content analysis and similar data-making efforts. In a recent paper published in a special issue of Human Communication Research devoted to methodological topics (Vol. 28, No. 4), Lombard, Snyder-Duch, and Bracken (2002) presented their findings of how reliability was treated in 200 content analyses indexed in Communication Abstracts between 1994 and 1998. In essence, their results showed that only 69% of the articles report reliabilities. This amounts to no significant improvements in reliability concerns over earlier studies (e.g., Pasadeos et al., 1995; Riffe & Freitag, 1996). Lombard et al. attribute the failure of consistent reporting of reliability of content analysis data to a lack of available guidelines, and they end up proposing such guidelines. Having come to their conclusions by content analytic means, Lombard et al. also report their own reliabilities, using not one, but four, indices for comparison: %-agreement; Scott‟s (1955) (pi); Cohen‟s (1960) (kappa); and Krippendorff‟s (1970, 2004) (alpha). Faulty software 1 initially led the authors to miscalculations, now corrected (Lombard et al., 2003). However, in their original article, the authors cite several common beliefs about these coefficients and make recommendations that I contend can seriously mislead content analysis researchers, thus prompting my corrective response. To put the discussion of the purpose of these indices into a larger perspective, I will have to go beyond the arguments presented in their article. Readers who might find the technical details tedious are invited to go to the conclusion, which is in the form of four recommendations. The Conservative/Liberal Continuum Lombard et al. report “general agreement (in the literature) that indices which do not account for chance agreement (%-agreement and Holsti‟s [1969] CR – actually Osgood‟s [1959, p.44] index) are too liberal while those that do (, , and ) are too conservative” (2002, p. 593). For liberal or “more lenient” coefficients, the authors recommend adopting higher critical values for accepting data as reliable than for conservative or “more stringent” ones (p. 600) – as if differences between these coefficients were merely a problem of locating them on a shared scale. Discussing reliability coefficients in terms of a conservative/liberal continuum is not widespread in the technical literature. It entered the writing on content analysis not so long ago. Neuendorf (2002) used this terminology, but only in passing. Before that, Potter and Lewine-Donnerstein (1999, p. 287) cited Perreault and Leigh‟s (1989, p. 
138) assessment of the chance-corrected as being “overly conservative” and “difficult to compare (with) ... Cronbach‟s (1951) alpha,” for example – as if the comparison with a correlation coefficient mattered. I contend that trying to understand diverse agreement coefficients by their numerical results alone, conceptually placing them on a conservative/liberal continuum, is seriously misleading. Statistical coefficients are mathematical functions. They apply to a collection of data (records, values, or numbers) and result in one numerical index intended to inform its users about something – here about whether they can rely on their data. Differences among coefficients are due to responding to (a) different patterns in data and/or (b) the same patterns but in different ways. How these functions respond to which patterns of agreement and how their numerical results relate to the risk of drawing false conclusions from unreliable data – not just the numbers they produce – must be understood before selecting one coefficient over another. Issues of Scale Let me start with the ranges of the two broad classes of agreement coefficients, chancecorrected agreement and raw or %-agreement. While both kinds equal 1.000 or 100% when agreement is perfect, and data are considered reliable, %-agreement is zero when absolutely no agreement is observed; when one coder‟s categories unfailingly differ from the categories used by the other; or disagreement is systematic and extreme. Extreme disagreement is statistically almost as unexpected as perfect agreement. It should not occur, however, when coders apply the same coding instruction to the same set of units of analysis and work independently of each other, as is required when generating data for testing reliability. Where the reliability of data is an issue, the worst situation is not when one coder looks over the shoulder of another coder and selects a non-matching category, but when coders do not understand what they are asked to interpret, categorize by throwing dice, or examine unlike units of analysis, causing research results that are indistinguishable from chance events. While zero %-agreement has no meaningful reliability interpretation, chance-corrected agreement coefficients, by contrast, become zero when coders‟ behavior bears no relation to the phenomena to be coded, leaving researchers clueless as to what their data mean. Thus, the scales of chance-corrected agreement coefficients are anchored at two points of meaningful reliability interpretations, zero and one, whereas %-like agreement indices are anchored in only one, 100%, which renders all deviations from 100% uninterpretable, as far as data reliability is concerned. %-agreement has other undesirable properties; for example, it is limited to nominal data; can compare only two coders 2 ; and high %-agreement becomes progressively unlikely as more categories are available. I am suggesting that the convenience of calculating %-agreement, which is often cited as its advantage, cannot compensate for its meaninglessness. Let me hasten to add that chance-correction is not a panacea either. Chance-corrected agreement coefficients do not form a uniform class. Benini (1901), Bennett, Alpert, and Goldstein (1954), Cohen (1960), Goodman and Kruskal (1954), Krippendorff (1970, 2004), and Scott (1955) build different corrections into their coefficients, thus measuring reliability on slightly different scales. Chance can mean different things. 
Discussing these coefficients in terms of being conservative (yielding lower values than expected) or liberal (yielding higher values than expected) glosses over their crucial mathematical differences and privileges an intuitive sense of the kind of magnitudes that are somehow considered acceptable. If it were the issue of striking a balance between conservative and liberal coefficients, it would be easy to follow statistical practices and modify larger coefficients by squaring them and smaller coefficients by applying the square root to them. However, neither transformation would alter what these mathematical functions actually measure; only the sizes of the intervals between 0 and 1. Lombard et al., by contrast, attempt to resolve their dilemma by recommending that content analysts use several reliability measures. In their own report, they use , “an index ...known to be conservative,” but when measures below .700, they revert to %-agreement, “a liberal index,” and accept data as reliable as long as the latter is above .900 (2002, p. 596). They give no empirical justification for their choice. I shall illustrate below the kind of data that would pass their criterion. Relation Between Agreement and Reliability To be clear, agreement is what we measure; reliability is what we wish to infer from it. In content analysis, reproducibility is arguably the most important interpretation of reliability (Krippendorff, 2004, p.215). I am suggesting that an agreement coefficient can become an index of reliability only when (1) It is applied to proper reliability data. Such data result from duplicating the process of describing, categorizing, or measuring a sample of data obtained from the population of data whose reliability is in question. Typically, but not exclusively, duplications are achieved by employing two or more widely available coders or observers who, working independent of each other, apply the same coding instructions or recording devices to the same set of units of analysis. (2) It treats units of analysis as separately describable or categorizable, without, however, presuming any knowledge about the correctness of their descriptions or categories. What matters, therefore, is not truths, correlations, subjectivity, or the predictability of one particular coder‟s use of categories from that by another coder, but agreements or disagreements among multiple descriptions generated by a coding procedure, regardless of who enacts that procedure. Reproducibility is about data making, not about coders. A coefficient for assessing the reliability of data must treat coders as interchangeable and count observable coder idiosyncrasies as disagreement. (3) Its values correlate with the conditions under which one is willing to rely on imperfect data. The correlation between a measure of agreement and the rely-ability on data involves two kinds of inferences. Estimating the (dis)agreement in a population of data from the (dis)agreements observed and meas",
"title": ""
}
] |
scidocsrr
|
66050a1c07103ccbe9a127ba5523ddd0
|
Soft-to-Hard Vector Quantization for End-to-End Learned Compression of Images and Neural Networks
|
[
{
"docid": "8ac8ad61dc5357f3dc3ab1020db8bada",
"text": "We show how to learn many layers of features on color images and we use these features to initialize deep autoencoders. We then use the autoencoders to map images to short binary codes. Using semantic hashing [6], 28-bit codes can be used to retrieve images that are similar to a query image in a time that is independent of the size of the database. This extremely fast retrieval makes it possible to search using multiple di erent transformations of the query image. 256-bit binary codes allow much more accurate matching and can be used to prune the set of images found using the 28-bit codes.",
"title": ""
},
{
"docid": "fb1c9fcea2f650197b79711606d4678b",
"text": "Self-similarity based super-resolution (SR) algorithms are able to produce visually pleasing results without extensive training on external databases. Such algorithms exploit the statistical prior that patches in a natural image tend to recur within and across scales of the same image. However, the internal dictionary obtained from the given image may not always be sufficiently expressive to cover the textural appearance variations in the scene. In this paper, we extend self-similarity based SR to overcome this drawback. We expand the internal patch search space by allowing geometric variations. We do so by explicitly localizing planes in the scene and using the detected perspective geometry to guide the patch search process. We also incorporate additional affine transformations to accommodate local shape variations. We propose a compositional model to simultaneously handle both types of transformations. We extensively evaluate the performance in both urban and natural scenes. Even without using any external training databases, we achieve significantly superior results on urban scenes, while maintaining comparable performance on natural scenes as other state-of-the-art SR algorithms.",
"title": ""
},
{
"docid": "7539c44b888e21384dc266d1cf397be0",
"text": "Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of 108× and 17.7× respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at https://github.com/yiwenguo/Dynamic-Network-Surgery.",
"title": ""
},
{
"docid": "6cf9456d2fe55d2115fd40efbb1a8f96",
"text": "We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiabilty of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images.",
"title": ""
}
] |
[
{
"docid": "1a7c72a1353e7983c5b55c82be70488d",
"text": "education Ph.D. candidate, EECS, University of California, Berkeley, Spring 2019 (Expected). Advised by Prof. Benjamin Recht. S.M., EECS, Massachusetts Institute of Technology, Spring 2014. Advised by Prof. Samuel Madden. Thesis: Fast Transactions for Multicore In-Memory Databases. B.A., Computer Science, University of California, Berkeley, Fall 2010. B.S., Mechanical Engineering, University of California, Berkeley, Fall 2010.",
"title": ""
},
{
"docid": "ef79fbd26ad0bdc951edcdef8bcffdbf",
"text": "Question answering (Q&A) sites, where communities of volunteers answer questions, may provide faster, cheaper, and better services than traditional institutions. However, like other Web 2.0 platforms, user-created content raises concerns about information quality. At the same time, Q&A sites may provide answers of different quality because they have different communities and technological platforms. This paper compares answer quality on four Q&A sites: Askville, WikiAnswers, Wikipedia Reference Desk, and Yahoo! Answers. Findings indicate that: 1) the use of similar collaborative processes on these sites results in a wide range of outcomes. Significant differences in answer accuracy, completeness, and verifiability were found; 2) answer multiplication does not always result in better information. Answer multiplication yields more complete and verifiable answers but does not result in higher accuracy levels; and 3) a Q&A site’s popularity does not correlate with its answer quality, on all three measures.",
"title": ""
},
{
"docid": "a456e0d4a421fbae34cbbb3ca6217fa1",
"text": "Software-Defined Networking (SDN) is an emerging network architecture, centralized in the SDN controller entity, that decouples the control plane from the data plane. This controller-based solution allows programmability, and dynamic network reconfigurations, providing decision taking with global knowledge of the network. Currently, there are more than thirty SDN controllers with different features, such as communication protocol version, programming language, and architecture. Beyond that, there are also many studies about controller performance with the goal to identify the best one. However, some conclusions have been unjust because benchmark tests did not follow the same methodology, or controllers were not in the same category. Therefore, a standard benchmark methodology is essential to compare controllers fairly. The standardization can clarify and help us to understand the real behavior and weaknesses of an SDN controller. The main goal of this work-in-progress is to show existing benchmark methodologies, bringing a discussion about the need SDN controller benchmark standardization.",
"title": ""
},
{
"docid": "05532f05f969c6db5744e5dd22a6fbe4",
"text": "Lamellipodia, filopodia and membrane ruffles are essential for cell motility, the organization of membrane domains, phagocytosis and the development of substrate adhesions. Their formation relies on the regulated recruitment of molecular scaffolds to their tips (to harness and localize actin polymerization), coupled to the coordinated organization of actin filaments into lamella networks and bundled arrays. Their turnover requires further molecular complexes for the disassembly and recycling of lamellipodium components. Here, we give a spatial inventory of the many molecular players in this dynamic domain of the actin cytoskeleton in order to highlight the open questions and the challenges ahead.",
"title": ""
},
{
"docid": "4d1be9aebf7534cce625b95bde4696c6",
"text": "BlockChain (BC) has attracted tremendous attention due to its immutable nature and the associated security and privacy benefits. BC has the potential to overcome security and privacy challenges of Internet of Things (IoT). However, BC is computationally expensive, has limited scalability and incurs significant bandwidth overheads and delays which are not suited to the IoT context. We propose a tiered Lightweight Scalable BC (LSB) that is optimized for IoT requirements. We explore LSB in a smart home setting as a representative example for broader IoT applications. Low resource devices in a smart home benefit from a centralized manager that establishes shared keys for communication and processes all incoming and outgoing requests. LSB achieves decentralization by forming an overlay network where high resource devices jointly manage a public BC that ensures end-to-end privacy and security. The overlay is organized as distinct clusters to reduce overheads and the cluster heads are responsible for managing the public BC. LSB incorporates several optimizations which include algorithms for lightweight consensus, distributed trust and throughput management. Qualitative arguments demonstrate that LSB is resilient to several security attacks. Extensive simulations show that LSB decreases packet overhead and delay and increases BC scalability compared to relevant baselines.",
"title": ""
},
{
"docid": "aa29b992a92f958b7ac8ff8e1cb8cd19",
"text": "Physically unclonable functions (PUFs) provide a device-unique challenge-response mapping and are employed for authentication and encryption purposes. Unpredictability and reliability are the core requirements of PUFs: unpredictability implies that an adversary cannot sufficiently predict future responses from previous observations. Reliability is important as it increases the reproducibility of PUF responses and hence allows validation of expected responses. However, advanced machine-learning algorithms have been shown to be a significant threat to the practical validity of PUFs, as they are able to accurately model PUF behavior. The most effective technique was shown to be the XOR-based combination of multiple PUFs, but as this approach drastically reduces reliability, it does not scale well against software-based machine-learning attacks. In this paper, we analyze threats to PUF security and propose PolyPUF, a scalable and secure architecture to introduce polymorphic PUF behavior. This architecture significantly increases model-building resistivity while maintaining reliability. An extensive experimental evaluation and comparison demonstrate that the PolyPUF architecture can secure various PUF configurations and is the only evaluated approach to withstand highly complex neural network machine-learning attacks. Furthermore, we show that PolyPUF consumes less energy and has less implementation overhead in comparison to lightweight reference architectures.",
"title": ""
},
{
"docid": "db47da56df6cb45b97dd494714b994ca",
"text": "There has been a recent surge of interest in open source software development, which involves developers at many different locations and organizations sharing code to develop and refine programs. To an economist, the behavior of individual programmers and commercial companies engaged in open source projects is initially startling. This paper makes a preliminary exploration of the economics of open source software. We highlight the extent to which labor economics, especially the literature on “career concerns,” can explain many of these projects’ features. Aspects of the future of open source development process, however, remain somewhat difficult to predict with “offthe-shelf” economic models. Josh Lerner Jean Triole Harvard Business School Institut D'Economie Indutrielle (IDEI) Morgan Hall, Room 395 Manufacture des Tabacs MF529 Boston, MA 02163, 21 Allées de Brienne and NBER 31000 Toulouse Cedex FRANCE [email protected] [email protected]",
"title": ""
},
{
"docid": "a96209a2f6774062537baff5d072f72f",
"text": "In recent years, extensive research has been conducted in the area of Service Level Agreement (SLA) for utility computing systems. An SLA is a formal contract used to guarantee that consumers’ service quality expectation can be achieved. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. Fundamental issue is the management of SLAs, including SLA autonomy management or trade off among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification for these extensive works. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed and used in utility computing environment. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-art systems and emerging challenges for future research.",
"title": ""
},
{
"docid": "d4c6ff1d5dbd8e94abb134020ae5d58a",
"text": "Considerable research demonstrates that the depletion of self-regulatory resources impairs performance on subsequent tasks that demand these resources. The current research sought to assess the impact of perceived resource depletion on subsequent task performance at both high and low levels of actual depletion. The authors manipulated perceived resource depletion by having participants 1st complete a depleting or nondepleting task before being presented with feedback that did or did not provide a situational attribution for their internal state. Participants then persisted at a problem-solving task (Experiments 1-2), completed an attention-regulation task (Experiment 3), or responded to a persuasive message (Experiment 4). The findings consistently demonstrated that individuals who perceived themselves as less (vs. more) depleted, whether high or low in actual depletion, were more successful at subsequent self-regulation. Thus, perceived regulatory depletion can impact subsequent task performance-and this impact can be independent of one's actual state of depletion.",
"title": ""
},
{
"docid": "debb1b975738fd0b3db01bbc1b2ff9f3",
"text": "An attempt to solve the collapse problem in the framework of a time-symmetric quantum formalism is reviewed. Although the proposal does not look very attractive, its concept a world defined by two quantum states, one evolving forwards and one evolving backwards in time is found to be useful in modifying the many-worlds picture of Everett’s theory.",
"title": ""
},
{
"docid": "b160d69d87ad113286ee432239b090d7",
"text": "Isogeometric analysis has been proposed as a methodology for bridging the gap between computer aided design (CAD) and finite element analysis (FEA). Although both the traditional and isogeometric pipelines rely upon the same conceptualization to solid model steps, they drastically differ in how they bring the solid model both to and through the analysis process. The isogeometric analysis process circumvents many of the meshing pitfalls experienced by the traditional pipeline by working directly within the approximation spaces used by the model representation. In this paper, we demonstrate that in a similar way as how mesh quality is used in traditional FEA to help characterize the impact of the mesh on analysis, an analogous concept of model quality exists within isogeometric analysis. The consequence of these observations is the need for a new area within modeling – analysis-aware modeling – in which model properties and parameters are selected to facilitate isogeometric analysis. ! 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7f9f32390be9a86d8a5776e9ec5fc980",
"text": "Commonly, HoG/SVM classifier uses rectangular images for HoG feature descriptor extraction and training. This means significant additional work has to be done to process irrelevant pixels belonging to the background surrounding the object of interest. While some objects may indeed be square or rectangular, most of objects are not easily representable by simple geometric shapes. In Bitmap-HoG approach we propose in this paper, the irregular shape of object is represented by a bitmap to avoid processing of extra background pixels. Bitmap, derived from the training dataset, encodes those portions of an image to be used to train a classifier. Experimental results show that not only the proposed algorithm decreases the workload associated with HoG/SVM classifiers by 75% compared to the state-of-the-art, but also it shows an average increase about 5% in recall and a decrease about 2% in precision in comparison with standard HoG.",
"title": ""
},
{
"docid": "fb38bdc5772975f9705b2ca90f819b25",
"text": "We propose a general approach to the gaze redirection problem in images that utilizes machine learning. The idea is to learn to re-synthesize images by training on pairs of images with known disparities between gaze directions. We show that such learning-based re-synthesis can achieve convincing gaze redirection based on monocular input, and that the learned systems generalize well to people and imaging conditions unseen during training. We describe and compare three instantiations of our idea. The first system is based on efficient decision forest predictors and redirects the gaze by a fixed angle in real-time (on a single CPU), being particularly suitable for the videoconferencing gaze correction. The second system is based on a deep architecture and allows gaze redirection by a range of angles. The second system achieves higher photorealism, while being several times slower. The third system is based on real-time decision forests at test time, while using the supervision from a “teacher” deep network during training. The third system approaches the quality of a teacher network in our experiments, and thus provides a highly realistic real-time monocular solution to the gaze correction problem. We present in-depth assessment and comparisons of the proposed systems based on quantitative measurements and a user study.",
"title": ""
},
{
"docid": "3294f746432ba9746a8cc8082a1021f7",
"text": "CRYPTONITE is a programmable processor tailored to the needs of crypto algorithms. The design of CRYPTONITE was based on an in-depth application analysis in which standard crypto algorithms (AES, DES, MD5, SHA-1, etc) were distilled down to their core functionality. We describe this methodology and use AES as a central example. Starting with a functional description of AES, we give a high level account of how to implement AES efficiently in hardware, and present several novel optimizations (which are independent of CRYPTONITE).We then describe the CRYPTONITE architecture, highlighting how AES implementation issues influenced the design of the processor and its instruction set. CRYPTONITE is designed to run at high clock rates and be easy to implement in silicon while providing a significantly better performance/area/power tradeoff than general purpose processors.",
"title": ""
},
{
"docid": "ea739d96ee0558fb23f0a5a020b92822",
"text": "Text and structural data mining of web and social media (WSM) provides a novel disease surveillance resource and can identify online communities for targeted public health communications (PHC) to assure wide dissemination of pertinent information. WSM that mention influenza are harvested over a 24-week period, 5 October 2008 to 21 March 2009. Link analysis reveals communities for targeted PHC. Text mining is shown to identify trends in flu posts that correlate to real-world influenza-like illness patient report data. We also bring to bear a graph-based data mining technique to detect anomalies among flu blogs connected by publisher type, links, and user-tags.",
"title": ""
},
{
"docid": "2b32e29760ba9745e59ae629c46eff93",
"text": "We present a novel recurrent neural network (RNN)–based model that combines the remembering ability of unitary evolution RNNs with the ability of gated RNNs to effectively forget redundant or irrelevant information in its memory. We achieve this by extending restricted orthogonal evolution RNNs with a gating mechanism similar to gated recurrent unit RNNs with a reset gate and an update gate. Our model is able to outperform long short-term memory, gated recurrent units, and vanilla unitary or orthogonal RNNs on several long-term-dependency benchmark tasks. We empirically show that both orthogonal and unitary RNNs lack the ability to forget. This ability plays an important role in RNNs. We provide competitive results along with an analysis of our model on many natural sequential tasks, including question answering, speech spectrum prediction, character-level language modeling, and synthetic tasks that involve long-term dependencies such as algorithmic, denoising, and copying tasks.",
"title": ""
},
{
"docid": "1cdd88ea6899afc093102990040779e2",
"text": "Available online xxxx",
"title": ""
},
{
"docid": "5288500535b3eaf67daf24071bf6300f",
"text": "All aspects of human-computer interaction, from the high-level concerns of organizational context and system requirements to the conceptual, semantic, syntactic, and lexical levels of user interface design, are ultimately funneled through physical input and output actions and devices. The fundamental task in computer input is to move information from the brain of the user to the computer. Progress in this discipline attempts to increase the useful bandwidth across that interface by seeking faster, more natural, and more convenient means for a user to transmit information to a computer. This article mentions some of the technical background for this area, surveys the range of input devices currently in use and emerging, and considers future trends in input.",
"title": ""
},
{
"docid": "9b4c240bd55523360e92dbed26cb5dc2",
"text": "CBT has been seen as an alternative to the unmanageable population of undergraduate students in Nigerian universities. This notwithstanding, the peculiar nature of some courses hinders its total implementation. This study was conducted to investigate the students’ perception of CBT for undergraduate chemistry courses in University of Ilorin. To this end, it examined the potential for using student feedback in the validation of assessment. A convenience sample of 48 students who had taken test on CBT in chemistry was surveyed and questionnaire was used for data collection. Data analysis demonstrated an auspicious characteristics of the target context for the CBT implementation as majority (95.8%) of students said they were competent with the use of computers and 75% saying their computer anxiety was only mild or low but notwithstanding they have not fully accepted the testing mode with only 29.2% in favour of it, due to the impaired validity of the test administration which they reported as being many erroneous chemical formulas, equations and structures in the test items even though they have nonetheless identified the achieved success the testing has made such as immediate scoring, fastness and transparency in marking. As quality of designed items improves and sufficient time is allotted according to the test difficulty, the test experience will become favourable for students and subsequently CBT will gain its validation in this particular context.",
"title": ""
},
{
"docid": "0b507193ca68d05a3432a9e735df5d95",
"text": "Capturing image with defocused background by using a large aperture is a widely used technique in digital single-lens reflex (DSLR) camera photography. It is also desired to provide this function to smart phones. In this paper, a new algorithm is proposed to synthesize such an effect for a single portrait image. The foreground portrait is detected using a face prior based salient object detection algorithm. Then with an improved gradient domain guided image filter, the details in the foreground are enhanced while the background pixels are blurred. In this way, the background objects are defocused and thus the foreground objects are emphasized. The resultant image looks similar to image captured using a camera with a large aperture. The proposed algorithm can be adopted in smart phones, especially for the front cameras of smart phones.",
"title": ""
}
] |
scidocsrr
|
41e544205c1822216c9b5c6043c65ba7
|
Comparative Changes of Lipid Levels in Treatment-Naive, HIV-1-Infected Adults Treated with Dolutegravir vs. Efavirenz, Raltegravir, and Ritonavir-Boosted Darunavir-Based Regimens Over 48 Weeks
|
[
{
"docid": "b9087793bd9bcc37deef95d1eea09f25",
"text": "BACKGROUND\nDolutegravir (GSK1349572), a once-daily HIV integrase inhibitor, has shown potent antiviral response and a favourable safety profile. We evaluated safety, efficacy, and emergent resistance in antiretroviral-experienced, integrase-inhibitor-naive adults with HIV-1 with at least two-class drug resistance.\n\n\nMETHODS\nING111762 (SAILING) is a 48 week, phase 3, randomised, double-blind, active-controlled, non-inferiority study that began in October, 2010. Eligible patients had two consecutive plasma HIV-1 RNA assessments of 400 copies per mL or higher (unless >1000 copies per mL at screening), resistance to two or more classes of antiretroviral drugs, and had one to two fully active drugs for background therapy. Participants were randomly assigned (1:1) to once-daily dolutegravir 50 mg or twice-daily raltegravir 400 mg, with investigator-selected background therapy. Matching placebo was given, and study sites were masked to treatment assignment. The primary endpoint was the proportion of patients with plasma HIV-1 RNA less than 50 copies per mL at week 48, evaluated in all participants randomly assigned to treatment groups who received at least one dose of study drug, excluding participants at one site with violations of good clinical practice. Non-inferiority was prespecified with a 12% margin; if non-inferiority was established, then superiority would be tested per a prespecified sequential testing procedure. A key prespecified secondary endpoint was the proportion of patients with treatment-emergent integrase-inhibitor resistance. The trial is registered at ClinicalTrials.gov, NCT01231516.\n\n\nFINDINGS\nAnalysis included 715 patients (354 dolutegravir; 361 raltegravir). At week 48, 251 (71%) patients on dolutegravir had HIV-1 RNA less than 50 copies per mL versus 230 (64%) patients on raltegravir (adjusted difference 7·4%, 95% CI 0·7 to 14·2); superiority of dolutegravir versus raltegravir was then concluded (p=0·03). Significantly fewer patients had virological failure with treatment-emergent integrase-inhibitor resistance on dolutegravir (four vs 17 patients; adjusted difference -3·7%, 95% CI -6·1 to -1·2; p=0·003). Adverse event frequencies were similar across groups; the most commonly reported events for dolutegravir versus raltegravir were diarrhoea (71 [20%] vs 64 [18%] patients), upper respiratory tract infection (38 [11%] vs 29 [8%]), and headache (33 [9%] vs 31 [9%]). Safety events leading to discontinuation were infrequent in both groups (nine [3%] dolutegravir, 14 [4%] raltegravir).\n\n\nINTERPRETATION\nOnce-daily dolutegravir, in combination with up to two other antiretroviral drugs, is well tolerated with greater virological effect compared with twice-daily raltegravir in this treatment-experienced patient group.\n\n\nFUNDING\nViiV Healthcare.",
"title": ""
}
] |
[
{
"docid": "ebc77c29a8f761edb5e4ca588b2e6fb5",
"text": "Gigantomastia by definition means bilateral benign progressive breast enlargement to a degree that requires breast reduction surgery to remove more than 1800 g of tissue on each side. It is seen at puberty or during pregnancy. The etiology for this condition is still not clear, but surgery remains the mainstay of treatment. We present a unique case of Gigantomastia, which was neither related to puberty nor pregnancy and has undergone three operations so far for recurrence.",
"title": ""
},
{
"docid": "a5c2e1723d213c21759ad9e20f3f7f15",
"text": "In this paper a nested PID steering control for lane keeping in vision based autonomous vehicles is designed to perform path following in the case of roads with an uncertain curvature. The control input is the steering wheel angle: it is designed on the basis of the yaw rate, measured by a gyroscope, and the lateral offset, measured by the vision system as the distance between the road centerline and a virtual point at a fixed distance from the vehicle. No lateral acceleration and no lateral speed measurements are required. A PI active front steering control on the yaw rate tracking error is used to reject constant disturbances and the overall effect of parameter variations while improving vehicle steering dynamics. The yaw rate reference is viewed as the control input in an external control loop: it is designed using a PID control on the lateral offset to reject the disturbances on the curvature which increase linearly with respect to time. The robustness is investigated with respect to speed variations and uncertain vehicle physical parameters: it is shown that the controlled system is asymptotically stable for all perturbations in the range of interest. Several simulations are carried out on a standard big sedan CarSim vehicle model to explore the robustness with respect to unmodelled effects such as combined lateral and longitudinal tire forces, pitch and roll. The simulations show reduced lateral offset and new stable μ-split braking manoeuvres in comparison with the CarSim model predictive steering controller implemented by CarSim.",
"title": ""
},
{
"docid": "e13b4b92c639a5b697356466e00e05c3",
"text": "In fashion retailing, the display of product inventory at the store is important to capture consumers’ attention. Higher inventory levels might allow more attractive displays and thus increase sales, in addition to avoiding stock-outs. We develop a choice model where product demand is indeed affected by inventory, and controls for product and store heterogeneity, seasonality, promotions and potential unobservable shocks in each market. We empirically test the model with daily traffic, inventory and sales data from a large retailer, at the store-day-product level. We find that the impact of inventory level on sales is positive and highly significant, even in situations of extremely high service level. The magnitude of this effect is large: each 1% increase in product-level inventory at the store increases sales of 0.58% on average. This supports the idea that inventory has a strong role in helping customers choose a particular product within the assortment. We finally describe how a retailer should optimally decide its inventory levels within a category and describe the properties of the optimal solution. Applying such optimization to our data set yields consistent and significant revenue improvements, of more than 10% for any date and store compared to current practices. Submitted: April 6, 2016. Revised: May 17, 2017",
"title": ""
},
{
"docid": "fd14310dd9a039175c075059e4ed31e4",
"text": "A new self-reconfigurable robot is presented. The robot is a hybrid chain/lattice design with several novel features. An active mechanical docking mechanism provides inter-module connection, along with optical and electrical interface. The docking mechanisms function additionally as driven wheels. Internal slip rings provide unlimited rotary motion to the wheels, allowing the modules to move independently by driving on flat surfaces, or in assemblies negotiating more complex terrain. Modules in the system are mechanically homogeneous, with three identical docking mechanisms within a module. Each mechanical dock is driven by a high torque actuator to enable movement of large segments within a multi-module structure, as well as low-speed driving. Preliminary experimental results demonstrate locomotion, mechanical docking, and lifting of a single module.",
"title": ""
},
{
"docid": "d41bbac7ec2596fe2a6503a0ac468947",
"text": "Artistic style transfer is the problem of synthesizing an image with content similar to a given image and style similar to another. Although recent feed-forward neural networks can generate stylized images in real-time, these models produce a single stylization given a pair of style/content images, and the user doesn’t have control over the synthesized output. Moreover, the style transfer depends on the hyper-parameters of the model with varying “optimum” for different input images. Therefore, if the stylized output is not appealing to the user, she/he has to try multiple models or retrain one with different hyper-parameters to get a favorite stylization. In this paper, we address these issues by proposing a novel method which allows adjustment of crucial hyper-parameters, after the training and in real-time, through a set of manually adjustable parameters. These parameters enable the user to modify the synthesized outputs from the same pair of style/content images, in search of a favorite stylized image. Our quantitative and qualitative experiments indicate how adjusting these parameters is comparable to retraining the model with different hyperparameters. We also demonstrate how these parameters can be randomized to generate results which are diverse but still very similar in style and content.",
"title": ""
},
{
"docid": "fef24d203d0a2e5d52aa887a0a442cf3",
"text": "The property that has given humans a dominant advantage over other species is not strength or speed, but intelligence. If progress in artificial intelligence continues unabated, AI systems will eventually exceed humans in general reasoning ability. A system that is “superintelligent” in the sense of being “smarter than the best human brains in practically every field” could have an enormous impact upon humanity (Bostrom 2014). Just as human intelligence has allowed us to develop tools and strategies for controlling our environment, a superintelligent system would likely be capable of developing its own tools and strategies for exerting control (Muehlhauser and Salamon 2012). In light of this potential, it is essential to use caution when developing AI systems that can exceed human levels of general intelligence, or that can facilitate the creation of such systems.",
"title": ""
},
{
"docid": "81a9c8a0314703f2c73789f46b394bfe",
"text": "In order to reproduce jaw motions and mechanics that match the human jaw function truthfully with the conception of bionics, a novel human jaw movement robot based on mechanical biomimetic principles was proposed. Firstly, based on the biomechanical properties of mandibular muscles, a jaw robot is built based on the 6-PSS parallel mechanism. Secondly, the inverse kinematics solution equations are derived. Finally, kinematics performances, such as workspace with the orientation constant, manipulability, dexterity of the jaw robot are obtained. These indices show that the parallel mechanism have a big enough flexible workspace, no singularity, and a good motion transfer performance for human chewing movement.",
"title": ""
},
{
"docid": "dbe9214608442659b9a3e1b8b3946c30",
"text": "This study aimed to explore pre-hospital delay and its associated factors in first-ever stroke registered in communities from three cities in China. The rates of delay greater than or equal to 2 hours were calculated and factors associated with delays were determined by non-conditional binary logistic regression, after adjusting for different explanatory factors. Among the 403 cases of stroke with an accurate documented time of prehospital delay, the median time (interquartile range) was 4.00 (1.50-14.00) hours. Among the 544 cases of stroke with an estimated time range of prehospital delay, 24.8% of patients were transferred to the emergency department or hospital within 2 hours, only 16.9% of patients with stroke were aware that the initial symptom represented a stroke, only 18.8% used the emergency medical service and one-third of the stroke cases were not identified by ambulance doctors. In the multivariate analyses, 8 variables or sub-variables were identified. In conclusion, prehospital delay of stroke was common in communities. Thus, intervention measures in communities should focus on education about the early identification of stroke and appropriate emergency medical service (EMS) use, as well as the development of organized stroke care.",
"title": ""
},
{
"docid": "e5d13cc2320a205972e792e6e3cd464f",
"text": "Sentiment analysis of a movie review plays an important role in understanding the sentiment conveyed by the user towards the movie. In the current work we focus on aspect based sentiment analysis of movie reviews in order to find out the aspect specific driving factors. These factors are the score given to various movie aspects and generally aspects with high driving factors direct the polarity of the review the most. The experiment showed that by giving high driving factors to Movie, Acting and Plot aspects of a movie, we obtained the highest accuracy in the analysis of movie reviews.",
"title": ""
},
{
"docid": "f4cbdcdb55e2bf49bcc62a79293f19b7",
"text": "Network slicing for 5G provides Network-as-a-Service (NaaS) for different use cases, allowing network operators to build multiple virtual networks on a shared infrastructure. With network slicing, service providers can deploy their applications and services flexibly and quickly to accommodate diverse services’ specific requirements. As an emerging technology with a number of advantages, network slicing has raised many issues for the industry and academia alike. Here, the authors discuss this technology’s background and propose a framework. They also discuss remaining challenges and future research directions.",
"title": ""
},
{
"docid": "2a258c1a2e723e998a7bad6708b542a2",
"text": "Contents Preface xi Acknowledgements xiii Endorsement xv About the authors xvii 1 A brief history of control and simulation 1",
"title": ""
},
{
"docid": "68d2150cf1e4b954be23ad6cc90ddda3",
"text": "This paper investigates changes over time in the behavior of Android ad libraries. Taking a sample of 114,000 apps, we extract and classify their ad libraries. By considering the release dates of the applications that use a specific ad library version, we estimate the release date for the library, and thus build a chronological map of the permissions used by various ad libraries over time. By considering install counts, we are able to estimate the number of times that a given library has been installed on users’ devices. We find that the use of most permissions has increased over the last several years, and that more libraries are able to use permissions that pose particular risks to user privacy and security.",
"title": ""
},
{
"docid": "de86da441c52644d255836040e1aedf0",
"text": "This paper outlines the design of a wide flare angle axially corrugated conical horn for a classical offset dual-reflector antenna system. The design minimizes the input reflection coefficient of the horn and maximizes the antenna efficiency of the antenna system by simultaneously limiting the sidelobe and cross-polarization levels to the system specifications. The effects of the number of corrugations in the horn and the number of parameters used in the optimization are also investigated.",
"title": ""
},
{
"docid": "c9993b2d046bf0e796014f2a434dc1a0",
"text": "Recently, diverse types of chaotic image encryption algorithms have been explored to meet the high demands in realizing secured real time image sharing applications. In this context, to achieve high sensitivity and superior key space, a multiple chaotic map based image encryption algorithm has been proposed. The proposed algorithm employs three-stage permutation and diffusion to withstand several attacks and the same is modelled in reconfigurable platform namely Field Programmable Gate Array (FPGA). The comprehensive analysis is done with various parameters to exhibit the robustness of the proposed algorithm and its ability to withstand brute-force, differential and statistical attacks. The synthesized result demonstrates that the reconfigurable hardware architecture takes approximately 0.098 ms for encrypting an image of size 256 × 256. Further the resource utilization and timing analyzer results are reported.",
"title": ""
},
{
"docid": "f7607bfa9590ed8be49bbc02ec099955",
"text": "Entries in microblogging sites are very short. For example, a ‘tweet’ (a post or status update on the popular microblogging site Twitter) can contain at most 140 characters. To comply with this restriction, users frequently use abbreviations to express their thoughts, thus producing sentences that are often poorly structured or ungrammatical. As a result, it becomes a challenge to come up with methods for automatically identifying named entities (names of persons, organizations, locations etc.). In this study, we use a four-step approach to automatic named entity recognition from microposts. First, we do some preprocessing of the micropost (e.g. replace abbreviations with actual words). Then we use an off-the-shelf part-of-speech tagger to tag the nouns. Next, we use the Google Search API to retrieve sentences containing the tagged nouns. Finally, we run a standard Named Entity Recognizer (NER) on the retrieved sentences. The tagged nouns are returned along with the tags assigned by the NER. This simple approach, using readily available components, yields promising results on standard benchmark data.",
"title": ""
},
{
"docid": "a1a8dc4d3c1c0d2d76e0f1cd0cb039d2",
"text": "73 generalized vertex median of a weighted graph, \" Operations Res., pp. 955-961, July 1967. and 1973, respectively. He spent two and a half years at Bell Laboratories , Murray Hill, NJ, developing telemetrized automatic surveillance and control systems. He is now Manager at Data Communications Systems, Vienna, VA, where he has major responsibilities in research and development of network analysis and design capabilities, and has applied these capabilities in the direction of projects ranging from feasability analysis and design of front end processors for the Navy to development of network architectures for the FAA. NY, responsible for contributing to the ongoing research in the areas of large network design, topological optimization for terminal access, the concentrator location problem, and flow and congestion control strategies for packet switching networks. At present, Absfruct-An algorithm is defined for establishing routing tables in the individual nodes of a data network. The routing fable at a node i specifies, for each other node j , what fraction of the traffic destined far node j should leave node i on each of the links emanating from node i. The algorithm is applied independently at each node and successively updates the routing table at that node based on information communicated between adjacent nodes about the marginal delay to each destination. For stationary input traffic statistics, the average delay per message through the network converges, with successive updates of the routing tables, to the minimum average delay over all routing assignments. The algorithm has the additional property that the traffic to each destination is guaranteed to be loop free at each iteration of the algorithm. In addition, a new global convergence theorem for non-continuous iteration algorithms is developed. INTRODUCTION T HE problem of routing assignments has been one of the most intensively studied areas in the field of data networks in recent years. These routing problems can be roughly classified as static routing, quasi-static routing, and dynamic routing. Static routing can be typified by the following type of problem. One wishes to establish a new data network and makes various assumptions about the node locations, the link locations, and the capacities of the links. Given the traffic between each source and destination, one can calculate the traffic on each link as a function of the routing of the traffic. If one approximates the queueing delays on each link as a function of the link traffic, one can …",
"title": ""
},
{
"docid": "646a1e7c1a71dc89fa92d76a19c7389e",
"text": "As modern GPUs rely partly on their on-chip memories to counter the imminent off-chip memory wall, the efficient use of their caches has become important for performance and energy. However, optimising cache locality system-atically requires insight into and prediction of cache behaviour. On sequential processors, stack distance or reuse distance theory is a well-known means to model cache behaviour. However, it is not straightforward to apply this theory to GPUs, mainly because of the parallel execution model and fine-grained multi-threading. This work extends reuse distance to GPUs by modelling: (1) the GPU's hierarchy of threads, warps, threadblocks, and sets of active threads, (2) conditional and non-uniform latencies, (3) cache associativity, (4) miss-status holding-registers, and (5) warp divergence. We implement the model in C++ and extend the Ocelot GPU emulator to extract lists of memory addresses. We compare our model with measured cache miss rates for the Parboil and PolyBench/GPU benchmark suites, showing a mean absolute error of 6% and 8% for two cache configurations. We show that our model is faster and even more accurate compared to the GPGPU-Sim simulator.",
"title": ""
},
{
"docid": "8cfdd59ba7271d48ea0d41acc2ef795a",
"text": "The Cole single-dispersion impedance model is based upon a constant phase element (CPE), a conductance parameter as a dependent parameter and a characteristic time constant as an independent parameter. Usually however, the time constant of tissue or cell suspensions is conductance dependent, and so the Cole model is incompatible with general relaxation theory and not a model of first choice. An alternative model with conductance as a free parameter influencing the characteristic time constant of the biomaterial has been analyzed. With this free-conductance model it is possible to separately follow CPE and conductive processes, and the nominal time constant no longer corresponds to the apex of the circular arc in the complex plane.",
"title": ""
},
{
"docid": "45a8fea3e8d780c65811cee79082237f",
"text": "Pedestrian dead reckoning, especially on smart-phones, is likely to play an increasingly important role in indoor tracking and navigation, due to its low cost and ability to work without any additional infrastructure. A challenge however, is that positioning, both in terms of step detection and heading estimation, must be accurate and reliable, even when the use of the device is so varied in terms of placement (e.g. handheld or in a pocket) or orientation (e.g holding the device in either portrait or landscape mode). Furthermore, the placement can vary over time as a user performs different tasks, such as making a call or carrying the device in a bag. A second challenge is to be able to distinguish between a true step and other periodic motion such as swinging an arm or tapping when the placement and orientation of the device is unknown. If this is not done correctly, then the PDR system typically overestimates the number of steps taken, leading to a significant long term error. We present a fresh approach, robust PDR (R-PDR), based on exploiting how bipedal motion impacts acquired sensor waveforms. Rather than attempting to recognize different placements through sensor data, we instead simply determine whether the motion of one or both legs impact the measurements. In addition, we formulate a set of techniques to accurately estimate the device orientation, which allows us to very accurately (typically over 99%) reject false positives. We demonstrate that regardless of device placement, we are able to detect the number of steps taken with >99.4% accuracy. R-PDR thus addresses the two main limitations facing existing PDR techniques.",
"title": ""
}
] |
scidocsrr
|
6558948207540f8c6cd843a855536f89
|
Analyzing Biases in Human Perception of User Age and Gender from Text
|
[
{
"docid": "c43785187ce3c4e7d1895b628f4a2df3",
"text": "In this paper we focus on the connection between age and language use, exploring age prediction of Twitter users based on their tweets. We discuss the construction of a fine-grained annotation effort to assign ages and life stages to Twitter users. Using this dataset, we explore age prediction in three different ways: classifying users into age categories, by life stages, and predicting their exact age. We find that an automatic system achieves better performance than humans on these tasks and that both humans and the automatic systems have difficulties predicting the age of older people. Moreover, we present a detailed analysis of variables that change with age. We find strong patterns of change, and that most changes occur at young ages.",
"title": ""
},
{
"docid": "fdc4efad14d79f1855dddddb6a30ace6",
"text": "We analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age. In our open-vocabulary technique, the data itself drives a comprehensive exploration of language that distinguishes people, finding connections that are not captured with traditional closed-vocabulary word-category analyses. Our analyses shed new light on psychosocial processes yielding results that are face valid (e.g., subjects living in high elevations talk about the mountains), tie in with other research (e.g., neurotic people disproportionately use the phrase 'sick of' and the word 'depressed'), suggest new hypotheses (e.g., an active life implies emotional stability), and give detailed insights (males use the possessive 'my' when mentioning their 'wife' or 'girlfriend' more often than females use 'my' with 'husband' or 'boyfriend'). To date, this represents the largest study, by an order of magnitude, of language and personality.",
"title": ""
}
] |
[
{
"docid": "88334287928e86a89ed8d9e5974a5d6b",
"text": "Reading is the gateway to success in education. It is the heartbeat of all courses offered in institutions. It is therefore crucial to investigate Colleges of Education students reading habits and how to improve the skill. The study was a descriptive survey with a validated questionnaire on “Reading Habits among Colleges of Education students in the Information Age” (RHCESIA). A total number of two hundred (200) students were used from the two Colleges of Education in Oyo town, with gender and age as the moderating variables. The findings showed that almost all the respondents understand the importance of reading. 65.5% love to read from their various fields of specialization on a daily basis while 25.0% love reading from their fields of specialization every week. The study confirmed that good reading habits enhance academic performance. The study recommended that courses on communication skills should be included for the first year (100 level) students and prose work and fiction such as novels should be a compulsory course for second year students (200 level)",
"title": ""
},
{
"docid": "f513165fd055b04544dff6eb5b7ec771",
"text": "Low power wide area (LPWA) networks are attracting a lot of attention primarily because of their ability to offer affordable connectivity to the low-power devices distributed over very large geographical areas. In realizing the vision of the Internet of Things, LPWA technologies complement and sometimes supersede the conventional cellular and short range wireless technologies in performance for various emerging smart city and machine-to-machine applications. This review paper presents the design goals and the techniques, which different LPWA technologies exploit to offer wide-area coverage to low-power devices at the expense of low data rates. We survey several emerging LPWA technologies and the standardization activities carried out by different standards development organizations (e.g., IEEE, IETF, 3GPP, ETSI) as well as the industrial consortia built around individual LPWA technologies (e.g., LoRa Alliance, Weightless-SIG, and Dash7 alliance). We further note that LPWA technologies adopt similar approaches, thus sharing similar limitations and challenges. This paper expands on these research challenges and identifies potential directions to address them. While the proprietary LPWA technologies are already hitting the market with large nationwide roll-outs, this paper encourages an active engagement of the research community in solving problems that will shape the connectivity of tens of billions of devices in the next decade.",
"title": ""
},
{
"docid": "0c28741df3a9bf999f4abe7b840cfb26",
"text": "In this work, we analyze taxi-GPS traces collected in Lisbon, Portugal. We perform an exploratory analysis to visualize the spatiotemporal variation of taxi services; explore the relationships between pick-up and drop-off locations; and analyze the behavior in downtime (between the previous drop-off and the following pick-up). We also carry out the analysis of predictability of taxi trips for the next pick-up area type given history of taxi flow in time and space.",
"title": ""
},
{
"docid": "cf248f6d767072a4569e31e49918dea1",
"text": "We describe resources aimed at increasing the usability of the semantic representations utilized within the DELPH-IN (Deep Linguistic Processing with HPSG) consortium. We concentrate in particular on the Dependency Minimal Recursion Semantics (DMRS) formalism, a graph-based representation designed for compositional semantic representation with deep grammars. Our main focus is on English, and specifically English Resource Semantics (ERS) as used in the English Resource Grammar. We first give an introduction to ERS and DMRS and a brief overview of some existing resources and then describe in detail a new repository which has been developed to simplify the use of ERS/DMRS. We explain a number of operations on DMRS graphs which our repository supports, with sketches of the algorithms, and illustrate how these operations can be exploited in application building. We believe that this work will aid researchers to exploit the rich and effective but complex DELPH-IN resources.",
"title": ""
},
{
"docid": "1b6adeb66afcdd69950c9dfd7cb2e54a",
"text": "The vision of the Semantic Web was coined by Tim Berners-Lee almost two decades ago. The idea describes an extension of the existing Web in which “information is given well-defined meaning, better enabling computers and people to work in cooperation” [Berners-Lee et al., 2001]. Semantic annotations in HTML pages are one realization of this vision which was adopted by large numbers of web sites in the last years. Semantic annotations are integrated into the code of HTML pages using one of the three markup languages Microformats, RDFa, or Microdata. Major consumers of semantic annotations are the search engine companies Bing, Google, Yahoo!, and Yandex. They use semantic annotations from crawled web pages to enrich the presentation of search results and to complement their knowledge bases. However, outside the large search engine companies, little is known about the deployment of semantic annotations: How many web sites deploy semantic annotations? What are the topics covered by semantic annotations? How detailed are the annotations? Do web sites use semantic annotations correctly? Are semantic annotations useful for others than the search engine companies? And how can semantic annotations be gathered from the Web in that case? The thesis answers these questions by profiling the web-wide deployment of semantic annotations. The topic is approached in three consecutive steps: In the first step, two approaches for extracting semantic annotations from the Web are discussed. The thesis evaluates first the technique of focused crawling for harvesting semantic annotations. Afterward, a framework to extract semantic annotations from existing web crawl corpora is described. The two extraction approaches are then compared for the purpose of analyzing the deployment of semantic annotations in the Web. In the second step, the thesis analyzes the overall and markup language-specific adoption of semantic annotations. This empirical investigation is based on the largest web corpus that is available to the public. Further, the topics covered by deployed semantic annotations and their evolution over time are analyzed. Subsequent studies examine common errors within semantic annotations. In addition, the thesis analyzes the data overlap of the entities that are described by semantic annotations from the same and across different web sites. The third step narrows the focus of the analysis towards use case-specific issues. Based on the requirements of a marketplace, a news aggregator, and a travel portal the thesis empirically examines the utility of semantic annotations for these use cases. Additional experiments analyze the capability of product-related semantic annotations to be integrated into an existing product categorization schema. Especially, the potential of exploiting the diverse category information given by the web sites providing semantic annotations is evaluated.",
"title": ""
},
{
"docid": "080f76412f283fb236c28678bf9dada8",
"text": "We describe a new algorithm for robot localization, efficient both in terms of memory and processing time. It transforms a stream of laser range sensor data into a probabilistic calculation of the robot’s position, using a bidirectional Long Short-Term Memory (LSTM) recurrent neural network (RNN) to learn the structure of the environment and to answer queries such as: in which room is the robot? To achieve this, the RNN builds an implicit map of the environment.",
"title": ""
},
{
"docid": "8e094cb05d16c73d7bf7c2cbb553873d",
"text": "In this paper, the design of command to line-of-sight (CLOS) missile guidance law is addressed. Taking a three dimensional guidance model, the tracking control problem is formulated. To solve the target tracking problem, the feedback linearization controller is first designed. Although such control scheme possesses the simplicity property, but it presents the acceptable performance only in the absence of perturbations. In order to ensure the robustness properties against model uncertainties, a fuzzy adaptive algorithm is proposed with two parts including a fuzzy (Mamdani) system, whose rules are constructed based on missile guidance, and a so-called rule modifier to compensate the fuzzy rules, using the negative gradient method. Compared with some previous works, such control strategy provides a faster time response without large control efforts. The performance of feedback linearization controller is also compared with that of fuzzy adaptive strategy via various simulations.",
"title": ""
},
{
"docid": "5d5c4225b67ad8ca31f2d4f005dfa6ce",
"text": "Nurse residency programs have been developed with the goal of helping newly licensed nurses successfully transition to independent practice. The authors propose that all newly licensed nurses hired in acute care hospitals be required to complete an accredited residency program. An evidence table examines the state of the science related to transition-to-practice programs and provides the basis for recommendations.",
"title": ""
},
{
"docid": "05e4f3b88aa94a5dc3fcdd3d94ee21b7",
"text": "There has always been criticism for using ngram based similarity metrics, such as BLEU, NIST, etc, for evaluating the performance of NLG systems. However, these metrics continue to remain popular and are recently being used for evaluating the performance of systems which automatically generate questions from documents, knowledge graphs, images, etc. Given the rising interest in such automatic question generation (AQG) systems, it is important to objectively examine whether these metrics are suitable for this task. In particular, it is important to verify whether such metrics used for evaluating AQG systems focus on answerability of the generated question by preferring questions which contain all relevant information such as question type (Wh-types), entities, relations, etc. In this work, we show that current automatic evaluation metrics based on n-gram similarity do not always correlate well with human judgments about answerability of a question. To alleviate this problem and as a first step towards better evaluation metrics for AQG, we introduce a scoring function to capture answerability and show that when this scoring function is integrated with existing metrics, they correlate significantly better with human judgments. The scripts and data developed as a part of this work are made publicly available.1",
"title": ""
},
{
"docid": "b5e603ef5cae02919f7574d07347db38",
"text": "In this paper, we propose a novel approach for traffic accident anticipation through (i) Adaptive Loss for Early Anticipation (AdaLEA) and (ii) a large-scale self-annotated incident database for anticipation. The proposed AdaLEA allows a model to gradually learn an earlier anticipation as training progresses. The loss function adaptively assigns penalty weights depending on how early the model can anticipate a traffic accident at each epoch. Additionally, we construct a Near-miss Incident DataBase for anticipation. This database contains an enormous number of traffic near-miss incident videos and annotations for detail evaluation of two tasks, risk anticipation and risk-factor anticipation. In our experimental results, we found our proposal achieved the highest scores for risk anticipation (+6.6% better on mean average precision (mAP) and 2.36 sec earlier than previous work on the average time-to-collision (ATTC)) and risk-factor anticipation (+4.3% better on mAP and 0.70 sec earlier than previous work on ATTC).",
"title": ""
},
{
"docid": "b4e5153f7592394e8743bc0fdee40dcc",
"text": "This paper is focussed on the modelling and control of a hydraulically-driven biologically-inspired robotic leg. The study is part of a larger project aiming at the development of an autonomous quadruped robot (hyQ) for outdoor operations. The leg has two hydraulically-actuated degrees of freedom (DOF), the hip and knee joints. The actuation system is composed of proportional valves and asymmetric cylinders. After a brief description of the prototype leg, the paper shows the development of a comprehensive model of the leg where critical parameters have been experimentally identified. Subsequently the leg control design is presented. The core of this work is the experimental assessment of the pros and cons of single-input single-output (SISO) vs. multiple-input multiple-output (MIMO) and linear vs. nonlinear control algorithms in this application (the leg is a coupled multivariable system driven by nonlinear actuators). The control schemes developed are a conventional PID (linear SISO), a Linear Quadratic Regulator (LQR) controller (linear MIMO) and a Feedback Linearisation (FL) controller (nonlinear MIMO). LQR performs well at low frequency but its behaviour worsens at higher frequencies. FL produces the fastest response in simulation, but when implemented is sensitive to parameters uncertainty and needs to be properly modified to achieve equally good performance also in the practical implementation.",
"title": ""
},
{
"docid": "b100ca202f99e3ee086cd61f01349a30",
"text": "This paper is concerned with inertial-sensor-based tracking of the gravitation direction in mobile devices such as smartphones. Although this tracking problem is a classical one, choosing a good state-space for this problem is not entirely trivial. Even though for many other orientation related tasks a quaternion-based representation tends to work well, for gravitation tracking their use is not always advisable. In this paper we present a convenient linear quaternion-free state-space model for gravitation tracking. We also discuss the efficient implementation of the Kalman filter and smoother for the model. Furthermore, we propose an adaption mechanism for the Kalman filter which is able to filter out shot-noises similarly as has been proposed in context of adaptive and robust Kalman filtering. We compare the proposed approach to other approaches using measurement data collected with a smartphone.",
"title": ""
},
{
"docid": "cddef4fbdacd9ec0510369ffcc715cea",
"text": "This paper develops a multi-frame image super-resolution approach from a Bayesian view-point by marginalizing over the unknown registration parameters relating the set of input low-resolution views. In Tipping and Bishop’s Bayesian image super-resolution approach [16], the marginalization was over the superresolution image, necessitating the use of an unfavorable image prior. By integrating over the registration parameters rather than the high-resolution image, our method allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of the other algorithm. In addition to the motion model used by Tipping and Bishop, illumination components are introduced into the generative model, allowing us to handle changes in lighting as well as motion. We show results on real and synthetic datasets to illustrate the efficacy of this approach.",
"title": ""
},
{
"docid": "cd286f4dfd11ee4585436d34bb756867",
"text": "BACKGROUND\nTargeted therapies have markedly changed the treatment of cancer over the past 10 years. However, almost all tumors acquire resistance to systemic treatment as a result of tumor heterogeneity, clonal evolution, and selection. Although genotyping is the most currently used method for categorizing tumors for clinical decisions, tumor tissues provide only a snapshot, or are often difficult to obtain. To overcome these issues, methods are needed for a rapid, cost-effective, and noninvasive identification of biomarkers at various time points during the course of disease. Because cell-free circulating tumor DNA (ctDNA) is a potential surrogate for the entire tumor genome, the use of ctDNA as a liquid biopsy may help to obtain the genetic follow-up data that are urgently needed.\n\n\nCONTENT\nThis review includes recent studies exploring the diagnostic, prognostic, and predictive potential of ctDNA as a liquid biopsy in cancer. In addition, it covers biological and technical aspects, including recent advances in the analytical sensitivity and accuracy of DNA analysis as well as hurdles that have to be overcome before implementation into clinical routine.\n\n\nSUMMARY\nAlthough the analysis of ctDNA is a promising area, and despite all efforts to develop suitable tools for a comprehensive analysis of tumor genomes from plasma DNA, the liquid biopsy is not yet routinely used as a clinical application. Harmonization of preanalytical and analytical procedures is needed to provide clinical standards to validate the liquid biopsy as a clinical biomarker in well-designed and sufficiently powered multicenter studies.",
"title": ""
},
{
"docid": "260f7258c3739efec1910028ec429471",
"text": "Cryptography is considered to be a disciple of science of achieving security by converting sensitive information to an un-interpretable form such that it cannot be interpreted by anyone except the transmitter and intended recipient. An innumerable set of cryptographic schemes persist in which each of it has its own affirmative and feeble characteristics. In this paper we have we have developed a traditional or character oriented Polyalphabetic cipher by using a simple algebraic equation. In this we made use of iteration process and introduced a key K0 obtained by permuting the elements of a given key seed value. This key strengthens the cipher and it does not allow the cipher to be broken by the known plain text attack. The cryptanalysis performed clearly indicates that the cipher is a strong one.",
"title": ""
},
{
"docid": "80b5030cbb923f32dc791409eb184a80",
"text": "Bayesian Optimisation (BO) refers to a class of methods for global optimisation of a function f which is only accessible via point evaluations. It is typically used in settings where f is expensive to evaluate. A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model. Conventional BO methods have focused on Euclidean and categorical domains, which, in the context of model selection, only permits tuning scalar hyper-parameters of machine learning algorithms. However, with the surge of interest in deep learning, there is an increasing demand to tune neural network architectures. In this work, we develop NASBOT, a Gaussian process based BO framework for neural architecture search. To accomplish this, we develop a distance metric in the space of neural network architectures which can be computed efficiently via an optimal transport program. This distance might be of independent interest to the deep learning community as it may find applications outside of BO. We demonstrate that NASBOT outperforms other alternatives for architecture search in several cross validation based model selection tasks on multi-layer perceptrons and convolutional neural networks.",
"title": ""
},
{
"docid": "142f47f01a81b7978f65ea63460d98e5",
"text": "The developers of StarDog OWL/RDF DBMS have pioneered a new use of OWL as a schema language for RDF databases. This is achieved by adding integrity constraints (IC), also expressed in OWL syntax, to the traditional “open-world” OWL axioms. The new database paradigm requires a suitable visual schema editor. We propose here a two-level approach for integrated visual UML-style editing of extended OWL+IC ontologies: (i) introduce the notion of ontology splitter that can be used in conjunction with any OWL editor, and (ii) offer a custom graphical notation for axiom level annotations on the basis of compact UML-style OWL ontology editor OWLGrEd.",
"title": ""
},
{
"docid": "066ea4047e2811c6991b60befe28b2d5",
"text": "This study theorized and validated a model of knowledge sharing continuance in a special type of online community, the online question answering (Q&A) community, in which knowledge exchange is reflected mainly by asking and answering specific questions. We created a model that integrated knowledge sharing factors and knowledge self-efficacy into the expectation confirmation theory. The hypotheses derived ontinuance intention to answer questions enefits of answering questions atisfaction onfirmation nowledge self-efficacy from this model were empirically validated using an online survey conducted among users of a famous online Q&A community in China, “Yahoo! Answers China”. The results suggested that users’ intention to continue sharing knowledge (i.e., answering questions) was directly influenced by users’ ex-post feelings as consisting of two dimensions: satisfaction, and knowledge self-efficacy. Based on the obtained results, we also found that knowledge self-efficacy and confirmation mediated the relationship between benefits and satisfaction.",
"title": ""
},
{
"docid": "961d65a35edafc6250f0c45b1152606d",
"text": "By taking a global perspective in order to look at how the field of Medicine has diversified, we believe that we can come to see how the Family Physician has, over time, disappeared from it. Prior to the idea of the Family Physician, a single, non-specialist, Physician would be responsible for the oversight, diagnosis and treatment of a number of diseases. However, due to the ever increasing number of illnesses and diseases, over time, the role of the Family Physician became minimized, as Specialist Physicians of particular illnesses and diseases began emerging at speed. We now find ourselves, particularly in Pakistan, in dire need of more Family Physicians. In this review we look at how, in other parts of the world, Family Physicians – also referred to as General Practitioners, or ‘GPs’ in parts of Europe – are responsible for the preventive and curative provision to whole families. The principles of Family Medicine, we argue, are universal, and there are few contextual factors, such as geography, the availability of material and medical resources, as well as disease prevalence, that impact the practice of Family Physicians from country to country. By looking at these factors and by summing up the countries in which they are endemic, we believe we can then establish a link between the similarities in each First World Country, and provide a contrast between those and the things that impact practice in Third World Countries. We also examine the role of regulation, qualifications and contemporary primary and family care in Pakistan.",
"title": ""
},
{
"docid": "ef54b4816ef97097b4fdc5e6a0c423dd",
"text": "The need of a deeper understanding of coreless machines arises with new magnetic materials with higher remanent magnetization and the spread of high speed motors and generators. High energy density magnets allow complete ironless stator motor/generators configurations which are suitable for high speed machines and specifically in flywheel energy storage. Axial-flux and radial-flux machines are investigated and compared. The limits and merits of ironless machines are presented.",
"title": ""
}
] |
scidocsrr
|
6532d09dc1878c49ec4b06865aa496bf
|
A Minimal Closed-form Solution for the Perspective Three Orthogonal Angles (P3oA) Problem: Application To Visual Odometry
|
[
{
"docid": "2bdf54197687b7947bbad4ae45db6ae3",
"text": "The projections of world parallel lines in an image intersect at a single point called the vanishing point (VP). VPs are a key ingredient for various vision tasks including rotation estimation and 3D reconstruction. Urban environments generally exhibit some dominant orthogonal VPs. Given a set of lines extracted from a calibrated image, this paper aims to (1) determine the line clustering, i.e. find which line belongs to which VP, and (2) estimate the associated orthogonal VPs. None of the existing methods is fully satisfactory because of the inherent difficulties of the problem, such as the local minima and the chicken-and-egg aspect. In this paper, we present a new algorithm that solves the problem in a mathematically guaranteed globally optimal manner and can inherently enforce the VP orthogonality. Specifically, we formulate the task as a consensus set maximization problem over the rotation search space, and further solve it efficiently by a branch-and-bound procedure based on the Interval Analysis theory. Our algorithm has been validated successfully on sets of challenging real images as well as synthetic data sets.",
"title": ""
},
{
"docid": "5dac8ef81c7a6c508c603b3fd6a87581",
"text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.",
"title": ""
}
] |
[
{
"docid": "69c65ed870be8074d21ed1cfd0a42a2f",
"text": "With the popularity of online multimedia videos, there has been much interest in recent years in acoustic event detection and classification for the improvement of online video search. The audio component of a video has the potential to contribute significantly to multimedia event classification. Recent research in audio document classification has drawn parallels to text and image document retrieval by employing what is referred to as the bag-of-audio words (BoAW) method. Compared to supervised approaches where audio concept detectors are trained using annotated data and extracted labels are used as lowlevel features for multimedia event classification. The BoAW approach extracts audio concepts in an unsupervised fashion. Hence this method has the advantage that it can be employed easily for a new set of audio concepts in multimedia videos without going through a laborious annotation effort. In this paper, we explore variations of the BoAW method and present results on NIST 2011 multimedia event detection (MED) dataset.",
"title": ""
},
{
"docid": "6fbf6d6357705d8d48d94ca47ca61fa9",
"text": "Driven by the rapid development of Internet and digital technologies, we have witnessed the explosive growth of Web images in recent years. Seeing that labels can reflect the semantic contents of the images, automatic image annotation, which can further facilitate the procedure of image semantic indexing, retrieval, and other image management tasks, has become one of the most crucial research directions in multimedia. Most of the existing annotation methods, heavily rely on well-labeled training data (expensive to collect) and/or single view of visual features (insufficient representative power). In this paper, inspired by the promising advance of feature engineering (e.g., CNN feature and scale-invariant feature transform feature) and inexhaustible image data (associated with noisy and incomplete labels) on the Web, we propose an effective and robust scheme, termed robust multi-view semi-supervised learning (RMSL), for facilitating image annotation task. Specifically, we exploit both labeled images and unlabeled images to uncover the intrinsic data structural information. Meanwhile, to comprehensively describe an individual datum, we take advantage of the correlated and complemental information derived from multiple facets of image data (i.e., multiple views or features). We devise a robust pairwise constraint on outcomes of different views to achieve annotation consistency. Furthermore, we integrate a robust classifier learning component via $\\ell _{2,p}$ loss, which can provide effective noise identification power during the learning process. Finally, we devise an efficient iterative algorithm to solve the optimization problem in RMSL. We conduct comprehensive experiments on three different data sets, and the results illustrate that our proposed approach is promising for automatic image annotation.",
"title": ""
},
{
"docid": "f702a8c28184a6d49cd2f29a1e4e7ea4",
"text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.",
"title": ""
},
{
"docid": "070a1de608a35cddb69b84d5f081e94d",
"text": "Identifying potentially vulnerable locations in a code base is critical as a pre-step for effective vulnerability assessment; i.e., it can greatly help security experts put their time and effort to where it is needed most. Metric-based and pattern-based methods have been presented for identifying vulnerable code. The former relies on machine learning and cannot work well due to the severe imbalance between non-vulnerable and vulnerable code or lack of features to characterize vulnerabilities. The latter needs the prior knowledge of known vulnerabilities and can only identify similar but not new types of vulnerabilities. In this paper, we propose and implement a generic, lightweight and extensible framework, LEOPARD, to identify potentially vulnerable functions through program metrics. LEOPARD requires no prior knowledge about known vulnerabilities. It has two steps by combining two sets of systematically derived metrics. First, it uses complexity metrics to group the functions in a target application into a set of bins. Then, it uses vulnerability metrics to rank the functions in each bin and identifies the top ones as potentially vulnerable. Our experimental results on 11 real-world projects have demonstrated that, LEOPARD can cover 74.0% of vulnerable functions by identifying 20% of functions as vulnerable and outperform machine learning-based and static analysis-based techniques. We further propose three applications of LEOPARD for manual code review and fuzzing, through which we discovered 22 new bugs in real applications like PHP, radare2 and FFmpeg, and eight of them are new vulnerabilities.",
"title": ""
},
{
"docid": "baa71f083831919a067322ab4b268db5",
"text": "– The theoretical analysis gives an overview of the functioning of DDS, especially with respect to noise and spurs. Different spur reduction techniques are studied in detail. Four ICs, which were the circuit implementations of the DDS, were designed. One programmable logic device implementation of the CORDIC based quadrature amplitude modulation (QAM) modulator was designed with a separate D/A converter IC. For the realization of these designs some new building blocks, e.g. a new tunable error feedback structure and a novel and more cost-effective digital power ramp generator, were developed. Implementing a DDS on an FPGA using Xilinx’s ISE software. IndexTerms—CORDIC, DDS, NCO, FPGA, SFDR. ________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "5cc3ce9628b871d57f086268ae1510e0",
"text": "Unprecedented high volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit planning and operation of the future power system, and to help the customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using Deep Reinforcement Learning, a hybrid type of methods that combines Reinforcement Learning with Deep Learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, Deep Q-learning and Deep Policy Gradient, both of them being extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This highly-dimensional database includes information about photovoltaic power generation, electric vehicles as well as buildings appliances. Moreover, these on-line energy scheduling strategies could be used to provide realtime feedback to consumers to encourage more efficient use of electricity.",
"title": ""
},
{
"docid": "3cf1197436af89889edc04cae8acfb0f",
"text": "The rapid growth of new radio technologies for Smart City/Building/Home applications means that models of cross-technology interference are needed to inform the development of higher layer protocols and applications. We systematically investigate interference interactions between LoRa and IEEE 802.15.4g networks. Our results show that LoRa can obtain high packet reception rates, even in presence of strong IEEE 802.15.4g interference. IEEE 802.15.4g is also shown to have some resilience to LoRa interference. Both effects are highly dependent on the LoRa radio's spreading factor and bandwidth configuration, as well as on the channelization. The results are shown to arise from the interaction between the two radios' modulation schemes. The data have implications for the design and analysis of protocols for both radio technologies.",
"title": ""
},
{
"docid": "41f7d66c6e2c593eb7bda22c72a7c048",
"text": "Artificial neural networks are algorithms that can be used to perform nonlinear statistical modeling and provide a new alternative to logistic regression, the most commonly used method for developing predictive models for dichotomous outcomes in medicine. Neural networks offer a number of advantages, including requiring less formal statistical training, ability to implicitly detect complex nonlinear relationships between dependent and independent variables, ability to detect all possible interactions between predictor variables, and the availability of multiple training algorithms. Disadvantages include its \"black box\" nature, greater computational burden, proneness to overfitting, and the empirical nature of model development. An overview of the features of neural networks and logistic regression is presented, and the advantages and disadvantages of using this modeling technique are discussed.",
"title": ""
},
{
"docid": "824920b0b2a3deebf1a6692cdc72b019",
"text": "Neuropathology involving TAR DNA binding protein-43 (TDP-43) has been identified in a wide spectrum of neurodegenerative diseases collectively named as TDP-43 proteinopathy, including amyotrophic lateral sclerosis (ALS) and frontotemporal lobar dementia (FTLD). To test whether increased expression of wide-type human TDP-43 (hTDP-43) may cause neurotoxicity in vivo, we generated transgenic flies expressing hTDP-43 in various neuronal subpopulations. Expression in the fly eyes of the full-length hTDP-43, but not a mutant lacking its amino-terminal domain, led to progressive loss of ommatidia with remarkable signs of neurodegeneration. Expressing hTDP-43 in mushroom bodies (MBs) resulted in dramatic axon losses and neuronal death. Furthermore, hTDP-43 expression in motor neurons led to axon swelling, reduction in axon branches and bouton numbers, and motor neuron loss together with functional deficits. Thus, our transgenic flies expressing hTDP-43 recapitulate important neuropathological and clinical features of human TDP-43 proteinopathy, providing a powerful animal model for this group of devastating diseases. Our study indicates that simply increasing hTDP-43 expression is sufficient to cause neurotoxicity in vivo, suggesting that aberrant regulation of TDP-43 expression or decreased clearance of hTDP-43 may contribute to the pathogenesis of TDP-43 proteinopathy.",
"title": ""
},
{
"docid": "b454f55bf2b896fab6514035874add0f",
"text": "1 University of Banjaluka, Faculty of Electrical Engineering, Patre 5, 78000 Banjaluka, Bosnia and Herzegovina, E-mail: nemanjapoprzen@gmail 2 University of Banjaluka, Faculty of Electrical Engineering, Patre 5, 78000 Banjaluka, Bosnia and Herzegovina, E-mail: [email protected] Abstract: This report will describe the theories and techniques for shrinking the size of an antenna through the use of fractals. Fractal antennas can obtain radiation pattern and input impedance similar to a longer antenna, yet take less area due to the many contours of the shape. Fractal antennas are a fairly new research area and are likely to have a promising future in many applications.",
"title": ""
},
{
"docid": "ba4f3060a36021ef60f7bc6c9cde9d35",
"text": "Neural Networks (NN) are today increasingly used in Machine Learning where they have become deeper and deeper to accurately model or classify high-level abstractions of data. Their development however also gives rise to important data privacy risks. This observation motives Microsoft researchers to propose a framework, called Cryptonets. The core idea is to combine simplifications of the NN with Fully Homomorphic Encryptions (FHE) techniques to get both confidentiality of the manipulated data and efficiency of the processing. While efficiency and accuracy are demonstrated when the number of non-linear layers is small (eg 2), Cryptonets unfortunately becomes ineffective for deeper NNs which let the problem of privacy preserving matching open in these contexts. This work successfully addresses this problem by combining the original ideas of Cryptonets’ solution with the batch normalization principle introduced at ICML 2015 by Ioffe and Szegedy. We experimentally validate the soundness of our approach with a neural network with 6 non-linear layers. When applied to the MNIST database, it competes the accuracy of the best non-secure versions, thus significantly improving Cryptonets.",
"title": ""
},
{
"docid": "8949e00d17210c805712bb360a76d157",
"text": "The objective of this study was to describe the development and initial psychometric analysis of the UK English version of the Duchenne muscular dystrophy Functional Ability Self-Assessment Tool (DMDSAT), a patient-reported outcome (PRO) scale designed to measure functional ability in patients with Duchenne muscular dystrophy (DMD). Item selection was made by neuromuscular specialists and a Rasch analysis was performed to understand the psychometric properties of the DMDSAT. Instrument scores were also linked to cost of illness and health-related quality of life data. The administered version, completed by 186 UK patient-caregivers pairs, included eight items in four domains: Arm function, Mobility, Transfers, and Ventilation status. These items together successfully operationalized functional ability in DMD, with excellent targeting and reliability (Person Separation Index: 0.95; Cronbach's α: 0.93), stable item locations, and good fit to the Rasch model (mean person/item fit residual: -0.21/-0.44, SD: 0.32/1.28). Estimated item difficulty was in excellent agreement with clinical opinion (Spearman's ρ: 0.95) and instrument scores mapped well onto health economic outcomes. We show that the DMDSAT is a PRO instrument fit for purpose to measure functional ability in ambulant and non-ambulant patients with DMD. Rasch analysis augments clinical expertise in the development of robust rating scales.",
"title": ""
},
{
"docid": "44bebd3c18e1f8929b470f0dbfd7251b",
"text": "In this paper model for analysis electric DC drive made in Matlab Simulink and Matlab SimPower Systems is given. Basic mathematical formulation which describes DC motor is given. Existing laboratory motor is described. Simulation model of DC motor drive and model of discontinuous load is made. Comparison of model made in Matlab Simulink and existing model in SimPower Systems is given. Essential parameters for starting simulation of used DC motor drive is given. Dynamical characteristics of DC motor drive as results of both simulation are shown. Practical use of simulation model is proposed. Keywords— analysis, DC drive, Matlab, SimPower Systems, model, simulation.",
"title": ""
},
{
"docid": "c4ea4a88c12cfdd27c7dfd134a58b79a",
"text": "We study the strength of certain obfuscation techniques used to protect software from reverse engineering and tampering. We show that some common obfuscation methods can be defeated using a fault injection attack, namely an attack where during program execution an attacker injects errors into the program environment. By observing how the program fails under certain errors the attacker can deduce the obfuscated information in the program code without having to unravel the obfuscation mechanism. We apply this technique to extract a secret key from a block cipher obfuscated using a commercial obfuscation tool and draw conclusions on preventing this weakness.",
"title": ""
},
{
"docid": "7eed5e11e47807a3ff0af21461e88385",
"text": "We propose Attentive Regularization (AR), a method to constrain the activation maps of kernels in Convolutional Neural Networks (CNNs) to specific regions of interest (ROIs). Each kernel learns a location of specialization along with its weights through standard backpropagation. A differentiable attention mechanism requiring no additional supervision is used to optimize the ROIs. Traditional CNNs of different types and structures can be modified with this idea into equivalent Targeted Kernel Networks (TKNs), while keeping the network size nearly identical. By restricting kernel ROIs, we reduce the number of sliding convolutional operations performed throughout the network in its forward pass, speeding up both training and inference. We evaluate our proposed architecture on both synthetic and natural tasks across multiple domains. TKNs obtain significant improvements over baselines, requiring less computation (around an order of magnitude) while achieving superior performance.",
"title": ""
},
{
"docid": "27487316cbda79a378b706d19d53178f",
"text": "Pallister-Killian syndrome (PKS) is a congenital disorder attributed to supernumerary isochromosome 12p mosaicism. Craniofacial dysmorphism, learning impairment and seizures are considered cardinal features. However, little is known regarding the seizure and epilepsy patterns in PKS. To better define the prevalence and spectrum of seizures in PKS, we studied 51 patients (39 male, 12 female; median age 4 years and 9 months; age range 7 months to 31 years) with confirmed 12p tetrasomy. Using a parent-based structured questionnaire, we collected data regarding seizure onset, frequency, timing, semiology, and medication therapy. Patients were recruited through our practice, at PKS Kids family events, and via the PKS Kids website. Epilepsy occurred in 27 (53%) with 23 (85%) of those with seizures having seizure onset prior to 3.5 years of age. Mean age at seizure onset was 2 years and 4 months. The most common seizure types were myoclonic (15/27, 56%), generalized convulsions (13/27, 48%), and clustered tonic spasms (similar to infantile spasms; 8/27, 30%). Thirteen of 27 patients with seizures (48%) had more than one seizure type with 26 out of 27 (96%) ever having taken antiepileptic medications. Nineteen of 27 (70%) continued to have seizures and 17/27 (63%) remained on antiepileptic medication. The most commonly used medications were: levetiracetam (10/27, 37%), valproic acid (10/27, 37%), and topiramate (9/27, 33%) with levetiracetam felt to be \"most helpful\" by parents (6/27, 22%). Further exploration of seizure timing, in-depth analysis of EEG recordings, and collection of MRI data to rule out confounding factors is warranted.",
"title": ""
},
{
"docid": "8090f6eff6db1bb92599ecc26698d15f",
"text": "BACKGROUND\nSelf-compassion is a key psychological construct for assessing clinical outcomes in mindfulness-based interventions. The aim of this study was to validate the Spanish versions of the long (26 item) and short (12 item) forms of the Self-Compassion Scale (SCS).\n\n\nMETHODS\nThe translated Spanish versions of both subscales were administered to two independent samples: Sample 1 was comprised of university students (n = 268) who were recruited to validate the long form, and Sample 2 was comprised of Aragon Health Service workers (n = 271) who were recruited to validate the short form. In addition to SCS, the Mindful Attention Awareness Scale (MAAS), the State-Trait Anxiety Inventory-Trait (STAI-T), the Beck Depression Inventory (BDI) and the Perceived Stress Questionnaire (PSQ) were administered. Construct validity, internal consistency, test-retest reliability and convergent validity were tested.\n\n\nRESULTS\nThe Confirmatory Factor Analysis (CFA) of the long and short forms of the SCS confirmed the original six-factor model in both scales, showing goodness of fit. Cronbach's α for the 26 item SCS was 0.87 (95% CI = 0.85-0.90) and ranged between 0.72 and 0.79 for the 6 subscales. Cronbach's α for the 12-item SCS was 0.85 (95% CI = 0.81-0.88) and ranged between 0.71 and 0.77 for the 6 subscales. The long (26-item) form of the SCS showed a test-retest coefficient of 0.92 (95% CI = 0.89-0.94). The Intraclass Correlation (ICC) for the 6 subscales ranged from 0.84 to 0.93. The short (12-item) form of the SCS showed a test-retest coefficient of 0.89 (95% CI: 0.87-0.93). The ICC for the 6 subscales ranged from 0.79 to 0.91. The long and short forms of the SCS exhibited a significant negative correlation with the BDI, the STAI and the PSQ, and a significant positive correlation with the MAAS. The correlation between the total score of the long and short SCS form was r = 0.92.\n\n\nCONCLUSION\nThe Spanish versions of the long (26-item) and short (12-item) forms of the SCS are valid and reliable instruments for the evaluation of self-compassion among the general population. These results substantiate the use of this scale in research and clinical practice.",
"title": ""
},
{
"docid": "cc05dca89bf1e3f53cf7995e547ac238",
"text": "Ensembles of randomized decision trees, known as Random Forests, have become a valuable machine learning tool for addressing many computer vision problems. Despite their popularity, few works have tried to exploit contextual and structural information in random forests in order to improve their performance. In this paper, we propose a simple and effective way to integrate contextual information in random forests, which is typically reflected in the structured output space of complex problems like semantic image labelling. Our paper has several contributions: We show how random forests can be augmented with structured label information and be used to deliver structured low-level predictions. The learning task is carried out by employing a novel split function evaluation criterion that exploits the joint distribution observed in the structured label space. This allows the forest to learn typical label transitions between object classes and avoid locally implausible label configurations. We provide two approaches for integrating the structured output predictions obtained at a local level from the forest into a concise, global, semantic labelling. We integrate our new ideas also in the Hough-forest framework with the view of exploiting contextual information at the classification level to improve the performance on the task of object detection. Finally, we provide experimental evidence for the effectiveness of our approach on different tasks: Semantic image labelling on the challenging MSRCv2 and CamVid databases, reconstruction of occluded handwritten Chinese characters on the Kaist database and pedestrian detection on the TU Darmstadt databases.",
"title": ""
},
{
"docid": "aee708f75f1a8a95d62b139526e84780",
"text": "Data centers are experiencing an exponential increase in the amount of network traffic that they have to sustain due to cloud computing and several emerging web applications. To face this network load, large data centers are required with thousands of servers interconnected with high bandwidth switches. Current data center, based on general purpose processor, consume excessive power while their utilization is quite low. Hardware accelerators can provide high energy efficiency for many cloud applications but they lack the programming efficiency of processors. In the last few years, there several efforts for the efficient deployment of hardware accelerators in the data centers. This paper presents a thorough survey of the frameworks for the efficient utilization of the FPGAs in the data centers. Furthermore it presents the hardware accelerators that have been implemented for the most widely used cloud computing applications. Furthermore, the paper provides a qualitative categorization and comparison of the proposed schemes based on their main features such as speedup and energy efficiency.",
"title": ""
},
{
"docid": "539dc7f8657f83ac2ae9590a283c7321",
"text": "This paper presents a review on Optical Character Recognition Techniques. Optical Character recognition (OCR) is a technology that allows machines to automatically recognize the characters through an optical mechanism. OCR can be described as Mechanical or electronic conversion of scanned images where images can be handwritten, typewritten or printed text. It converts the images into machine-encoded text that can be used in machine translation, text-to-speech and text mining. Various techniques are available for character recognition in optical character recognition system. This material can be useful for the researchers who wish to work in character recognition area.",
"title": ""
}
] |
scidocsrr
|
c1903fe6bb416daece23dc45685eb301
|
Theory and Experiments on Vector Quantized Autoencoders
|
[
{
"docid": "d8e32dfbe629d374e7fd5e9571c20cd4",
"text": "Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13× fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.",
"title": ""
},
{
"docid": "b0bd9a0b3e1af93a9ede23674dd74847",
"text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.",
"title": ""
},
{
"docid": "0ce4a0dfe5ea87fb87f5d39b13196e94",
"text": "Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector QuantisedVariational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” -— where the latents are ignored when they are paired with a powerful autoregressive decoder -— typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.",
"title": ""
},
{
"docid": "f32e8f005d277652fe691216e96e7fd8",
"text": "PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup O(log N) sampling instead of O(N) enabling the practical generation of 512× 512 images. We evaluate the model on class-conditional image generation, text-toimage synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.",
"title": ""
},
{
"docid": "93f89a636828df50dfe48ffa3e868ea6",
"text": "The reparameterization trick enables the optimization of large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack continuous reparameterizations due to the discontinuous nature of discrete states. In this work we introduce concrete random variables – continuous relaxations of discrete random variables. The concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-likelihood of latent stochastic nodes) on the corresponding discrete graph. We demonstrate their effectiveness on density estimation and structured prediction tasks using neural networks.",
"title": ""
}
] |
[
{
"docid": "80f1c8b99de81b9b1220b4178d126042",
"text": "Indigenous groups offer alternative knowledge and perspectives based on their own locally developed practices of resource use. We surveyed the international literature to focus on the role of Traditional Ecological Knowledge in monitoring, responding to, and managing ecosystem processes and functions, with special attention to ecological resilience. Case studies revealed that there exists a diversity of local or traditional practices for ecosystem management. These include multiple species management, resource rotation, succession management, landscape patchiness management, and other ways of responding to and managing pulses and ecological surprises. Social mechanisms behind these traditional practices include a number of adaptations for the generation, accumulation, and transmission of knowledge; the use of local institutions to provide leaders/stewards and rules for social regulation; mechanisms for cultural internalization of traditional practices; and the development of appropriate world views and cultural values. Some traditional knowledge and management systems were characterized by the use of local ecological knowledge to interpret and respond to feedbacks from the environment to guide the direction of resource management. These traditional systems had certain similarities to adaptive management with its emphasis on feedback learning, and its treatment of uncertainty and unpredictability intrinsic to all ecosystems.",
"title": ""
},
{
"docid": "1bef3b336a4d7328378b93f1a6805593",
"text": "[摘要] 目的 调查军队飞行人员的睡眠习惯、常见睡眠问题、白天嗜睡程度和飞行前夜睡眠情况。方法 采取 整群随机抽样方法对1380名军队飞行人员进行问卷调查,对其中1328份有效问卷进行统计分析。结果 飞行人员平均 夜间睡眠时间6.99h,24h平均睡眠时间8.10h。4.10%的飞行人员24h睡眠时间不足6h。中重度打鼾发生率22.80%,易醒 或(和)早醒和入睡困难≥3次/周的发生率为7.65%和5.81%,夜尿患病率4.03%。平均ESS评分5.59±4.40分,以≥11分评 估嗜睡发生率为14.99%。飞行中困倦发生率14.20%。62.53%的飞行人员飞行前夜较平时睡眠发生变化。结论 飞行人 员中存在着多种睡眠问题,并且飞行前夜睡眠可发生变化,这些都可能影响飞行安全,应进行研究和干预。 [关键词] 问卷调查;数据收集;飞行;睡眠;嗜睡 [中图分类号] R856.74 [文献标志码] A [文章编号] 0577-7402(2012)02-0141-05",
"title": ""
},
{
"docid": "9ec10477ba242675c8bad3a1ca335b38",
"text": "PURPOSE\nThis paper explores the importance of family daily routines and rituals for the family's functioning and sense of identity.\n\n\nMETHODS\nThe findings of this paper are derived from an analysis of the morning routines of 40 families with children with disabilities in the United States and Canada. The participants lived in urban and rural areas. Forty of the 49 participants were mothers and the majority of the families were of European descent. Between one and four interviews were conducted with each participant. Topics included the family's story, daily routines, and particular occupations. Data on the morning routines of the families were analyzed for order and affective and symbolic meaning using a narrative approach.\n\n\nFINDINGS\nThe findings are presented as narratives of morning activities in five families. These narratives are examples for rituals, routines, and the absence of a routine. Rituals are discussed in terms of their affective and symbolic qualities, routines are discussed in terms of the order they give to family life, whereas the lack of family routine is discussed in terms of lack of order in the family.\n\n\nCONCLUSIONS\nFamily routines and rituals are organizational and meaning systems that may affect family's ability to adapt them.",
"title": ""
},
{
"docid": "65727ea9121735b1f8abf0d9631d402d",
"text": "BACKGROUND\nAttention-deficit/hyperactivity disorder (ADHD) is associated with difficulties in learning, behaviour and psychosocial adjustment that persist into adulthood. Decreased omega-3 fatty acids and increased inflammation or oxidative stress may contribute to neuro-developmental and psychiatric disorders such as ADHD. The aim of this study was to determine the effect of n-3 supplementation on hyperactivity, oxidative stress and inflammatory mediators in children with ADHD.\n\n\nMETHODS\nIn this double blind study, 103 children (6-12 years) with ADHD receiving maintenance therapy were assigned randomly into two groups. The n-3 group received n-3 fatty acids (635 mg eicosapentaenoic acid (EPA), 195 mg docosahexaenoic acid (DHA)), and the placebo group received olive oil capsules which were visually similar to the n-3 capsules. The duration of supplementation was 8 weeks. Plasma C-reactive protein (CRP), interleukin-6 (IL-6) and the activity of glutathione reductase (GR), catalase (CAT) and superoxide dismutase (SOD) were determined before and after the intervention. Likewise the Conners' Abbreviated Questionnaires (ASQ-P) was applied.\n\n\nRESULTS\nAfter 8-week intervention, a significant reduction was observed in the levels of CRP ( P < 0.05, 95% CI = 0.72-2.02) and IL-6 (P < 0.001, 95% CI = 1.93-24.33) in the n-3 group. There was also a significant increase in activity of SOD and GR (P < 0.001). A significant improvement was seen in the ASQ-P scores in the n-3 group (P < 005).\n\n\nCONCLUSION\nEight weeks of EPA and DHA supplementation decreased plasma inflammatory mediators and oxidative stress in the children with ADHD. These results suggest that n-3 fatty acid supplementation may offer a safe and efficacious treatment for children with ADHD.",
"title": ""
},
{
"docid": "0c4f02b3b361d60da1aec0f0c100dcf9",
"text": "Architecture Compliance Checking (ACC) is an approach to verify the conformance of implemented program code to high-level models of architectural design. ACC is used to prevent architectural erosion during the development and evolution of a software system. Static ACC, based on static software analysis techniques, focuses on the modular architecture and especially on rules constraining the modular elements. A semantically rich modular architecture (SRMA) is expressive and may contain modules with different semantics, like layers and subsystems, constrained by rules of different types. To check the conformance to an SRMA, ACC-tools should support the module and rule types used by the architect. This paper presents requirements regarding SRMA support and an inventory of common module and rule types, on which basis eight commercial and non-commercial tools were tested. The test results show large differences between the tools, but all could improve their support of SRMA, what might contribute to the adoption of ACC in practice.",
"title": ""
},
{
"docid": "b12a7edfbd65a734c290a873206bd2c2",
"text": "The autofocus problem in synthetic aperture radar imaging amounts to estimating unknown phase errors caused by unknown platform or target motion. At the heart of three state-of-the-art autofocus algorithms, namely, phase gradient autofocus, multichannel autofocus (MCA), and Fourier-domain multichannel autofocus (FMCA), is the solution of a constant modulus quadratic program (CMQP). Currently, these algorithms solve a CMQP by using an eigenvalue relaxation approach. We propose an alternative relaxation approach based on semidefinite programming, which has recently attracted considerable attention in other signal processing problems. Experimental results show that our proposed methods provide promising performance improvements for MCA and FMCA through an increase in computational complexity.",
"title": ""
},
{
"docid": "b51d531c2ff106124f96a4287e466b90",
"text": "Detecting buildings from very high resolution (VHR) aerial and satellite images is extremely useful in map making, urban planning, and land use analysis. Although it is possible to manually locate buildings from these VHR images, this operation may not be robust and fast. Therefore, automated systems to detect buildings from VHR aerial and satellite images are needed. Unfortunately, such systems must cope with major problems. First, buildings have diverse characteristics, and their appearance (illumination, viewing angle, etc.) is uncontrolled in these images. Second, buildings in urban areas are generally dense and complex. It is hard to detect separate buildings from them. To overcome these difficulties, we propose a novel building detection method using local feature vectors and a probabilistic framework. We first introduce four different local feature vector extraction methods. Extracted local feature vectors serve as observations of the probability density function (pdf) to be estimated. Using a variable-kernel density estimation method, we estimate the corresponding pdf. In other words, we represent building locations (to be detected) in the image as joint random variables and estimate their pdf. Using the modes of the estimated density, as well as other probabilistic properties, we detect building locations in the image. We also introduce data and decision fusion methods based on our probabilistic framework to detect building locations. We pick certain crops of VHR panchromatic aerial and Ikonos satellite images to test our method. We assume that these crops are detected using our previous urban region detection method. Our test images are acquired by two different sensors, and they have different spatial resolutions. Also, buildings in these images have diverse characteristics. Therefore, we can test our methods on a diverse data set. Extensive tests indicate that our method can be used to automatically detect buildings in a robust and fast manner in Ikonos satellite and our aerial images.",
"title": ""
},
{
"docid": "cb5f866f2977c7c0d66d75bea1094375",
"text": "A single-switch nonisolated dc/dc converter for a stand-alone photovoltaic (PV)-battery-powered pump system is proposed in this paper. The converter is formed by combining a buck converter with a buck-boost converter. This integration also resulted in reduced repeated power processing, hence improving the conversion efficiency. With only a single transistor, the converter is able to perform three tasks simultaneously, namely, maximum-power-point tracking (MPPT), battery charging, and driving the pump at constant flow rate. To achieve these control objectives, the two inductors operate in different modes such that variable switching frequency control and duty cycle control can be used to manage MPPT and output voltage regulation, respectively. The battery in the converter provides a more steady dc-link voltage as compared to that of a conventional single-stage converter and hence mitigates the high voltage stress problem. Experimental results of a 14-W laboratory prototype converter with a maximum efficiency of 92% confirmed the performance of the proposed converter when used in a PV-battery pump system.",
"title": ""
},
{
"docid": "e5667a65bc628b93a1d5b0e37bfb8694",
"text": "The problem of determining whether an object is in motion, irrespective of camera motion, is far from being solved. We address this challenging task by learning motion patterns in videos. The core of our approach is a fully convolutional network, which is learned entirely from synthetic video sequences, and their ground-truth optical flow and motion segmentation. This encoder-decoder style architecture first learns a coarse representation of the optical flow field features, and then refines it iteratively to produce motion labels at the original high-resolution. We further improve this labeling with an objectness map and a conditional random field, to account for errors in optical flow, and also to focus on moving things rather than stuff. The output label of each pixel denotes whether it has undergone independent motion, i.e., irrespective of camera motion. We demonstrate the benefits of this learning framework on the moving object segmentation task, where the goal is to segment all objects in motion. Our approach outperforms the top method on the recently released DAVIS benchmark dataset, comprising real-world sequences, by 5.6%. We also evaluate on the Berkeley motion segmentation database, achieving state-of-the-art results.",
"title": ""
},
{
"docid": "45e35b5dc8f89c083df8431051092704",
"text": "Document-based Question Answering aims to compute the similarity or relevance between two texts: question and answer. It is a typical and core task and considered as a touchstone of natural language understanding. In this article, we present a convolutional neural network based architecture to learn feature representations of each questionanswer pair and compute its match score. By taking the interaction and attention between question and answer into consideration, as well as word overlap indices, the empirical study on Chinese Open-Domain Question Answering (DBQA) Task (document-based) demonstrates the efficacy of the proposed model, which achieves the best result on NLPCC-ICCPOL 2016 Shared Task on DBQA.",
"title": ""
},
{
"docid": "5f4fbc9c9ee1f5fab94e6a0ead85a884",
"text": "We show that discourse structure, as defined by Rhetorical Structure Theory and provided by an existing discourse parser, benefits text categorization. Our approach uses a recursive neural network and a newly proposed attention mechanism to compute a representation of the text that focuses on salient content, from the perspective of both RST and the task. Experiments consider variants of the approach and illustrate its strengths and weaknesses.",
"title": ""
},
{
"docid": "34459005eaf3a5e5bc9e467ecdf9421c",
"text": "for recovering sparse solutions to an undetermined system of linear equations Ax = b. The algorithm is divided into two stages that are performed repeatedly. In the first stage a first-order iterative method called “shrinkage” yields an estimate of the subset of components of x likely to be nonzero in an optimal solution. Restricting the decision variables x to this subset and fixing their signs at their current values reduces the l1-norm ‖x‖1 to a linear function of x. The resulting subspace problem, which involves the minimization of a smaller and smooth quadratic function, is solved in the second phase. Our code FPC AS embeds this basic two-stage algorithm in a continuation (homotopy) approach by assigning a decreasing sequence of values to μ. This code exhibits state-of-the-art performance both in terms of its speed and its ability to recover sparse signals. It can even recover signals that are not as sparse as required by current compressive sensing theory.",
"title": ""
},
{
"docid": "f3a838d6298c8ae127e548ba62e872eb",
"text": "Plasmodium falciparum resistance to artemisinins, the most potent and fastest acting anti-malarials, threatens malaria elimination strategies. Artemisinin resistance is due to mutation of the PfK13 propeller domain and involves an unconventional mechanism based on a quiescence state leading to parasite recrudescence as soon as drug pressure is removed. The enhanced P. falciparum quiescence capacity of artemisinin-resistant parasites results from an increased ability to manage oxidative damage and an altered cell cycle gene regulation within a complex network involving the unfolded protein response, the PI3K/PI3P/AKT pathway, the PfPK4/eIF2α cascade and yet unidentified transcription factor(s), with minimal energetic requirements and fatty acid metabolism maintained in the mitochondrion and apicoplast. The detailed study of these mechanisms offers a way forward for identifying future intervention targets to fend off established artemisinin resistance.",
"title": ""
},
{
"docid": "4f2986b922e09df53aa7662bf58b1429",
"text": "Two semi-supervised feature extraction methods are proposed for electroencephalogram (EEG) classification. They aim to alleviate two important limitations in brain–computer interfaces (BCIs). One is on the requirement of small training sets owing to the need of short calibration sessions. The second is the time-varying property of signals, e.g., EEG signals recorded in the training and test sessions often exhibit different discriminant features. These limitations are common in current practical applications of BCI systems and often degrade the performance of traditional feature extraction algorithms. In this paper, we propose two strategies to obtain semi-supervised feature extractors by improving a previous feature extraction method extreme energy ratio (EER). The two methods are termed semi-supervised temporally smooth EER and semi-supervised importance weighted EER, respectively. The former constructs a regularization term on the preservation of the temporal manifold of test samples and adds this as a constraint to the learning of spatial filters. The latter defines two kinds of weights by exploiting the distribution information of test samples and assigns the weights to training data points and trials to improve the estimation of covariance matrices. Both of these two methods regularize the spatial filters to make them more robust and adaptive to the test sessions. Experimental results on data sets from nine subjects with comparisons to the previous EER demonstrate their better capability for classification.",
"title": ""
},
{
"docid": "7ea89697894cb9e0da5bfcebf63be678",
"text": "This paper develops a frequency-domain iterative machine learning (IML) approach for output tracking. Frequency-domain iterative learning control allows bounded noncausal inversion of system dynamics and is, therefore, applicable to nonminimum phase systems. The model used in the frequency-domain control update can be obtained from the input–output data acquired during the iteration process. However, such data-based approaches can have challenges if the noise-to-output-signal ratio is large. The main contribution of this paper is the use of kernel-based machine learning during the iterations to estimate both the model (and its inverse) for the control update, as well as the model uncertainty needed to establish bounds on the iteration gain for ensuring convergence. Another contribution is the proposed use of augmented inputs with persistency of excitation to promote learning of the model during iterations. The improved model can be used to better infer the inverse input resulting in lower initial error for new output trajectories. The proposed IML approach with the augmented input is illustrated with simulations for a benchmark nonminimum phase example.",
"title": ""
},
{
"docid": "04bc7757006176cd1307874d19b11dc6",
"text": "AIMS\nCompare vaginal resting pressure (VRP), pelvic floor muscle (PFM) strength, and endurance in women with and without diastasis recti abdominis at gestational week 21 and at 6 weeks, 6 months, and 12 months postpartum. Furthermore, to compare prevalence of urinary incontinence (UI) and pelvic organ prolapse (POP) in the two groups at the same assessment points.\n\n\nMETHODS\nThis is a prospective cohort study following 300 nulliparous pregnant women giving birth at a public university hospital. VRP, PFM strength, and endurance were measured with vaginal manometry. ICIQ-UI-SF questionnaire and POP-Q were used to assess UI and POP. Diastasis recti abdominis was diagnosed with palpation of ≥2 fingerbreadths 4.5 cm above, at, or 4.5 cm below the umbilicus.\n\n\nRESULTS\nAt gestational week 21 women with diastasis recti abdominis had statistically significant greater VRP (mean difference 3.06 cm H2 O [95%CI: 0.70; 5.42]), PFM strength (mean difference 5.09 cm H2 O [95%CI: 0.76; 9.42]) and PFM muscle endurance (mean difference 47.08 cm H2 O sec [95%CI: 15.18; 78.99]) than women with no diastasis. There were no statistically significant differences between women with and without diastasis in any PFM variables at 6 weeks, 6 months, and 12 months postpartum. No significant difference was found in prevalence of UI in women with and without diastasis at any assessment points. Six weeks postpartum 15.9% of women without diastasis had POP versus 4.1% in the group with diastasis (P = 0.001).\n\n\nCONCLUSIONS\nWomen with diastasis were not more likely to have weaker PFM or more UI or POP. Neurourol. Urodynam. 36:716-721, 2017. © 2016 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "20d5147f67fccce9ba3290793bf4d9b5",
"text": "Correspondence: David E Vance NB 456, 1701 University Boulevard, University of Alabama at Birmingham, Birmingham, AL 35294-1210, USA Tel +1 205 934 7589 Fax +1 205 996 7183 Email [email protected] Abstract: The ability to critically evaluate the merits of a quantitative design research article is a necessary skill for practitioners and researchers of all disciplines, including nursing, in order to judge the integrity and usefulness of the evidence and conclusions made in an article. In general, this skill is automatic for many practitioners and researchers who already possess a good working knowledge of research methodology, including: hypothesis development, sampling techniques, study design, testing procedures and instrumentation, data collection and data management, statistics, and interpretation of findings. For graduate students and junior faculty who have yet to master these skills, completing a formally written article critique can be a useful process to hone such skills. However, a fundamental knowledge of research methods is still needed in order to be successful. Because there are few published examples of critique examples, this article provides the practical points of conducting a formally written quantitative research article critique while providing a brief example to demonstrate the principles and form.",
"title": ""
},
{
"docid": "f9afcc134abda1c919cf528cbc975b46",
"text": "Multimodal question answering in the cultural heritage domain allows visitors to museums, landmarks or other sites to ask questions in a more natural way. This in turn provides better user experiences. In this paper, we propose the construction of a golden standard dataset dedicated to aiding research into multimodal question answering in the cultural heritage domain. The dataset, soon to be released to the public, contains multimodal content about the fascinating old-Egyptian Amarna period, including images of typical artworks, documents about these artworks (containing images) and over 800 multimodal queries integrating visual and textual questions. The multimodal questions and related documents are all in English. The multimodal questions are linked to relevant paragraphs in the related documents that contain the answer to the multimodal query.",
"title": ""
},
{
"docid": "b50ea06c20fb22d7060f08bc86d9d6ca",
"text": "The advent of the Social Web has provided netizens with new tools for creating and sharing, in a time- and cost-efficient way, their contents, ideas, and opinions with virtually the millions of people connected to the World Wide Web. This huge amount of information, however, is mainly unstructured as specifically produced for human consumption and, hence, it is not directly machine-processable. In order to enable a more efficient passage from unstructured information to structured data, aspect-based opinion mining models the relations between opinion targets contained in a document and the polarity values associated with these. Because aspects are often implicit, however, spotting them and calculating their respective polarity is an extremely difficult task, which is closer to natural language understanding rather than natural language processing. To this end, Sentic LDA exploits common-sense reasoning to shift LDA clustering from a syntactic to a semantic level. Rather than looking at word co-occurrence frequencies, Sentic LDA leverages on the semantics associated with words and multi-word expressions to improve clustering and, hence, outperform state-of-the-art techniques for aspect extraction.",
"title": ""
},
{
"docid": "a52b452f1fb7e1b48a1f3f50ea8a95a7",
"text": "Domain Adaptation (DA) techniques aim at enabling machine learning methods learn effective classifiers for a “target” domain when the only available training data belongs to a different “source” domain. In this extended abstract we briefly describe a new DA method called Distributional Correspondence Indexing (DCI) for sentiment classification. DCI derives term representations in a vector space common to both domains where each dimension reflects its distributional correspondence to a pivot, i.e., to a highly predictive term that behaves similarly across domains. The experiments we have conducted show that DCI obtains better performance than current state-of-theart techniques for cross-lingual and cross-domain sentiment classification.",
"title": ""
}
] |
scidocsrr
|
c83097b29942dc4e878c66424f47a918
|
An intelligent home networking system
|
[
{
"docid": "be99f6ba66d573547a09d3429536049e",
"text": "With the development of sensor, wireless mobile communication, embedded system and cloud computing, the technologies of Internet of Things have been widely used in logistics, Smart Meter, public security, intelligent building and so on. Because of its huge market prospects, Internet of Things has been paid close attention by several governments all over the world, which is regarded as the third wave of information technology after Internet and mobile communication network. Bridging between wireless sensor networks with traditional communication networks or Internet, IOT Gateway plays an important role in IOT applications, which facilitates the seamless integration of wireless sensor networks and mobile communication networks or Internet, and the management and control with wireless sensor networks. In this paper, we proposed an IOT Gateway system based on Zigbee and GPRS protocols according to the typical IOT application scenarios and requirements from telecom operators, presented the data transmission between wireless sensor networks and mobile communication networks, protocol conversion of different sensor network protocols, and control functionalities for sensor networks, and finally gave an implementation of prototyping system and system validation.",
"title": ""
}
] |
[
{
"docid": "2323e926fb6aab6984be3e8537e17eef",
"text": "In this paper, a novel method is proposed for Facial Expression Recognition (FER) using dictionary learning to learn both identity and expression dictionaries simultaneously. Accordingly, an automatic and comprehensive feature extraction method is proposed. The proposed method accommodates real-valued scores to a probability of what percent of the given Facial Expression (FE) is present in the input image. To this end, a dual dictionary learning method is proposed to learn both regression and feature dictionaries for FER. Then, two regression classification methods are proposed using a regression model formulated based on dictionary learning and two known classification methods including Sparse Representation Classification (SRC) and Collaborative Representation Classification (CRC). Convincing results are acquired for FER on the CK+, CK, MMI and JAFFE image databases compared to several state-of-the-arts. Also, promising results are obtained from evaluating the proposed method for generalization on other databases. The proposed method not only demonstrates excellent performance by obtaining high accuracy on all four databases but also outperforms other state-of-the-art approaches.",
"title": ""
},
{
"docid": "83d0dc6c2ad117cabbd7cd80463dbe43",
"text": "Internet addiction is a new and often unrecognized clinical disorder that can cause relational, occupational, and social problems. Pathological gambling is compared to problematic internet use because of overlapping diagnostic criteria. As computers are used with great frequency, detection and diagnosis of internet addiction is often difficult. Symptoms of a possible problem may be masked by legitimate use of the internet. Clinicians may overlook asking questions about computer use. To help clinicians identify internet addiction in practice, this paper provides an overview of the problem and the various subtypes that have been identified. The paper reviews conceptualizations of internet addiction, various forms that the disorder takes, and treatment considerations for working with this emergent client population.",
"title": ""
},
{
"docid": "bf23a6fcf1a015d379dee393a294761c",
"text": "This study addresses the inconsistency of contemporary literature on defining the link between leadership styles and personality traits. The plethora of literature on personality traits has culminated into symbolic big five personality dimensions but there is still a dearth of research on developing representative leadership styles despite the perennial fascination with the subject. Absence of an unequivocal model for developing representative styles in conjunction with the use of several non-mutually exclusive existing leadership styles has created a discrepancy in developing a coherent link between leadership and personality. This study sums up 39 different styles of leadership into five distinct representative styles on the basis of similar theoretical underpinnings and common characteristics to explore how each of these five representative leadership style relates to personality dimensions proposed by big five model.",
"title": ""
},
{
"docid": "ea3fd6ece19949b09fd2f5f2de57e519",
"text": "Multiple myeloma is the second most common hematologic malignancy. The treatment of this disease has changed considerably over the last two decades with the introduction to the clinical practice of novel agents such as proteasome inhibitors and immunomodulatory drugs. Basic research efforts towards better understanding of normal and missing immune surveillence in myeloma have led to development of new strategies and therapies that require the engagement of the immune system. Many of these treatments are under clinical development and have already started providing encouraging results. We, for the second time in the last two decades, are about to witness another shift of the paradigm in the management of this ailment. This review will summarize the major approaches in myeloma immunotherapies.",
"title": ""
},
{
"docid": "be89ea7764b6a22ce518bac03a8c7540",
"text": "In remote, rugged or sensitive environments ground based mapping for condition assessment of species is both time consuming and potentially destructive. The application of photogrammetric methods to generate multispectral imagery and surface models based on UAV imagery at appropriate temporal and spatial resolutions is described. This paper describes a novel method to combine processing of NIR and visible image sets to produce multiband orthoimages and DEM models from UAV imagery with traditional image location and orientation uncertainties. This work extends the capabilities of recently developed commercial software (Pix4UAV from Pix4D) to show that image sets of different modalities (visible and NIR) can be automatically combined to generate a 4 band orthoimage. Reconstruction initially uses all imagery sets (NIR and visible) to ensure all images are in the same reference frame such that a 4-band orthoimage can be created. We analyse the accuracy of this automatic process by using ground control points and an evaluation on the matching performance between images of different modalities is shown. By combining sub-decimetre multispectral imagery with high spatial resolution surface models and ground based observation it is possible to generate detailed maps of vegetation assemblages at the species level. Potential uses with other conservation monitoring are discussed.",
"title": ""
},
{
"docid": "2fbcd34468edf53ee08e0a76a048c275",
"text": "Recently, the introduction of the generative adversarial network (GAN) and its variants has enabled the generation of realistic synthetic samples, which has been used for enlarging training sets. Previous work primarily focused on data augmentation for semi-supervised and supervised tasks. In this paper, we instead focus on unsupervised anomaly detection and propose a novel generative data augmentation framework optimized for this task. In particular, we propose to oversample infrequent normal samples - normal samples that occur with small probability, e.g., rare normal events. We show that these samples are responsible for false positives in anomaly detection. However, oversampling of infrequent normal samples is challenging for real-world high-dimensional data with multimodal distributions. To address this challenge, we propose to use a GAN variant known as the adversarial autoencoder (AAE) to transform the high-dimensional multimodal data distributions into low-dimensional unimodal latent distributions with well-defined tail probability. Then, we systematically oversample at the 'edge' of the latent distributions to increase the density of infrequent normal samples. We show that our oversampling pipeline is a unified one: it is generally applicable to datasets with different complex data distributions. To the best of our knowledge, our method is the first data augmentation technique focused on improving performance in unsupervised anomaly detection. We validate our method by demonstrating consistent improvements across several real-world datasets.",
"title": ""
},
{
"docid": "914ffd6fd4ef493c9b2c67d89b8e2d18",
"text": "PET/CT medical image fusion has important clinical significance. As the multiwavelet transform has several particular advantages in comparison with scalar wavelets on image processing, this paper proposes a medical image fusion algorithm based on multiwavelet transform after in-depth study of wavelet theory. The algorithm achieves PET/CT fusion with wavelet coefficients fusion method. Experimental results show that fusion image combines information of the source images, adds more details and texture information, and achieves a good fusion result. Based on the proposed algorithm, we can obtain the best result when using gradient fusion in the low-frequency part and classification fusion in the high-frequency part.",
"title": ""
},
{
"docid": "f8d554c215cc40ddc71171b3f266c43a",
"text": "Nowadays, Edge computing allows to push the application intelligence at the boundaries of a network in order to get high-performance processing closer to both data sources and end-users. In this scenario, the Horizon 2020 BEACON project - enabling federated Cloud-networking - can be used to setup Fog computing environments were applications can be deployed in order to instantiate Edge computing applications. In this paper, we focus on the deployment orchestration of Edge computing distributed services on such fog computing environments. We assume that a distributed service is composed of many microservices. Users, by means of geolocation deployment constrains can select regions in which microservices will be deployed. Specifically, we present an Orchestration Broker that starting from an ad-hoc OpenStack-based Heat Orchestraton Template (HOT) service manifest of an Edge computing distributed service produces several HOT microservice manifests including the the deployment instruction for each involved Fog computing node. Experiments prove the goodness of our approach.",
"title": ""
},
{
"docid": "4bf7ad74cb51475e7e20f32aa4b767d9",
"text": "function parameters = main(); % File main.m, created by Eugene Izhikevich. August 28, 2001 % Uses voltage-clamp data from N voltage-step experiments to % determine (in)activation parameters of a transient current. % Data provided by user: global v times current E p q load v.dat % N by 2 matrix of voltage steps % [from, to; from, to;...] load times.dat % Time mesh of the voltage-clamped data load current.dat % Matrix of the current values. E = 50; % Reverse potential p = 3; % The number of activation gates q = 1; % The number of inactivation gates % Guess of initial values of parameters % activation V_1/2 k V_max sigma C_amp C_base par(1:6) = [ -50 20 -40 30 0.5 0.1]; % inactivation V_1/2 k V_max sigma C_amp C_base par(7:12) =[ -60 -5 -70 20 5 1]; par(13) = 1; % Maximal conductance g_max % If E, p, or q are not known, add par(14)=60, etc. % and modify test.m parameters = fmins(’test’,par);",
"title": ""
},
{
"docid": "9c61e4971829a799b6e979f1b6d69387",
"text": "This work examines humanoid social robots in Japan and the North America with a view to comparing and contrasting the projects cross culturally. In North America, I look at the work of Cynthia Breazeal at the Massachusetts Institute of Technology and her sociable robot project: Kismet. In Japan, at the Osaka University, I consider the project of Hiroshi Ishiguro: Repliée-Q2. I first distinguish between utilitarian and affective social robots. Then, drawing on published works of Breazeal and Ishiguro I examine the proposed vision of each project. Next, I examine specific characteristics (embodied and social intelligence, morphology and aesthetics, and moral equivalence) of Kismet and Repliée with a view to comparing the underlying concepts associated with each. These features are in turn connected to the societal preconditions of robots generally. Specifically, the role that history of robots, theology/spirituality, and popular culture plays in the reception and attitude toward robots is considered.",
"title": ""
},
{
"docid": "21c3f6d61eeeb4df1bdb500f388f71f3",
"text": "Status of This Memo This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the \"Internet Official Protocol Standards\" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Abstract The Extensible Authentication Protocol (EAP), defined in RFC 3748, enables extensible network access authentication. This document specifies the EAP key hierarchy and provides a framework for the transport and usage of keying material and parameters generated by EAP authentication algorithms, known as \"methods\". It also provides a detailed system-level security analysis, describing the conditions under which the key management guidelines described in RFC 4962 can be satisfied.",
"title": ""
},
{
"docid": "00b98536f0ecd554442a67fb31f77f4c",
"text": "We use a large, nationally-representative sample of working-age adults to demonstrate that personality (as measured by the Big Five) is stable over a four-year period. Average personality changes are small and do not vary substantially across age groups. Intra-individual personality change is generally unrelated to experiencing adverse life events and is unlikely to be economically meaningful. Like other non-cognitive traits, personality can be modeled as a stable input into many economic decisions. JEL classi cation: J3, C18.",
"title": ""
},
{
"docid": "85c4c0ffb224606af6bc3af5411d31ca",
"text": "Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization. We propose a novel coarse-to-fine attention model that hierarchically reads a document, using coarse attention to select top-level chunks of text and fine attention to read the words of the chosen chunks. While the computation for training standard attention models scales linearly with source sequence length, our method scales with the number of top-level chunks and can handle much longer sequences. Empirically, we find that while coarse-tofine attention models lag behind state-ofthe-art baselines, our method achieves the desired behavior of sparsely attending to subsets of the document for generation.",
"title": ""
},
{
"docid": "eda6795cb79e912a7818d9970e8ca165",
"text": "This study aimed to examine the relationship between maximum leg extension strength and sprinting performance in youth elite male soccer players. Sixty-three youth players (12.5 ± 1.3 years) performed 5 m, flying 15 m and 20 m sprint tests and a zigzag agility test on a grass field using timing gates. Two days later, subjects performed a one-repetition maximum leg extension test (79.3 ± 26.9 kg). Weak to strong correlations were found between leg extension strength and the time to perform 5 m (r = -0.39, p = 0.001), flying 15 m (r = -0.72, p < 0.001) and 20 m (r = -0.67, p < 0.001) sprints; between body mass and 5 m (r = -0.43, p < 0.001), flying 15 m (r = -0.75, p < 0.001), 20 m (r = -0.65, p < 0.001) sprints and agility (r =-0.29, p < 0.001); and between height and 5 m (r = -0.33, p < 0.01) and flying 15 m (r = -0.74, p < 0.001) sprints. Our results show that leg muscle strength and anthropometric variables strongly correlate with sprinting ability. This suggests that anthropometric characteristics should be considered to compare among youth players, and that youth players should undergo strength training to improve running speed.",
"title": ""
},
{
"docid": "e50842fc8438af7fe6ce4b6d9a5439a7",
"text": "OBJECTIVE\nTimely recognition and optimal management of atherogenic dyslipidemia (AD) and residual vascular risk (RVR) in family medicine.\n\n\nBACKGROUND\nThe global increase of the incidence of obesity is accompanied by an increase in the incidence of many metabolic and lipoprotein disorders, in particular AD, as an typical feature of obesity, metabolic syndrome, insulin resistance and diabetes type 2. AD is an important factor in cardio metabolic risk, and is characterized by a lipoprotein profile with low levels of high-density lipoprotein (HDL), high levels of triglycerides (TG) and high levels of low-density lipoprotein (LDL) cholesterol. Standard cardiometabolic risk assessment using the Framingham risk score and standard treatment with statins is usually sufficient, but not always that effective, because it does not reduce RVR that is attributed to elevated TG and reduced HDL cholesterol. RVR is subject to reduction through lifestyle changes or by pharmacological interventions. In some studies it was concluded that dietary interventions should aim to reduce the intake of calories, simple carbohydrates and saturated fats, with the goal of reaching cardiometabolic suitability, rather than weight reduction. Other studies have found that the reduction of carbohydrates in the diet or weight loss can alleviate AD changes, while changes in intake of total or saturated fat had no significant influence. In our presented case, a lifestyle change was advised as a suitable diet with reduced intake of carbohydrates and a moderate physical activity of walking for at least 180 minutes per week, with an recommendation for daily intake of calories alignment with the total daily (24-hour) energy expenditure (24-EE), depending on the degree of physical activity, type of food and the current health condition. Such lifestyle changes together with combined medical therapy with Statins, Fibrates and Omega-3 fatty acids, resulted in significant improvement in atherogenic lipid parameters.\n\n\nCONCLUSION\nUnsuitable atherogenic nutrition and insufficient physical activity are the new risk factors characteristic for AD. Nutritional interventions such as diet with reduced intake of carbohydrates and calories, moderate physical activity, combined with pharmacotherapy can improve atherogenic dyslipidemic profile and lead to loss of weight. Although one gram of fat release twice more kilo calories compared to carbohydrates, carbohydrates seems to have a greater atherogenic potential, which should be explored in future.",
"title": ""
},
{
"docid": "480c8d16f3e58742f0164f8c10a206dd",
"text": "Dyna is an architecture for reinforcement learning agents that interleaves planning, acting, and learning in an online setting. This architecture aims to make fuller use of limited experience to achieve better performance with fewer environmental interactions. Dyna has been well studied in problems with a tabular representation of states, and has also been extended to some settings with larger state spaces that require function approximation. However, little work has studied Dyna in environments with high-dimensional state spaces like images. In Dyna, the environment model is typically used to generate one-step transitions from selected start states. We applied one-step Dyna to several games from the Arcade Learning Environment and found that the model-based updates offered surprisingly little benefit, even with a perfect model. However, when the model was used to generate longer trajectories of simulated experience, performance improved dramatically. This observation also holds when using a model that is learned from experience; even though the learned model is flawed, it can still be used to accelerate learning.",
"title": ""
},
{
"docid": "f5b9cde4b7848f803b3e742298c92824",
"text": "For many years, analysis of short chain fatty acids (volatile fatty acids, VFAs) has been routinely used in identification of anaerobic bacteria. In numerous scientific papers, the fatty acids between 9 and 20 carbons in length have also been used to characterize genera and species of bacteria, especially nonfermentative Gram negative organisms. With the advent of fused silica capillary columns (which allows recovery of hydroxy acids and resolution of many isomers), it has become practical to use gas chromatography of whole cell fatty acid methyl esters to identify a wide range of organisms.",
"title": ""
},
{
"docid": "4b951d88ad9c3ca0b14b88cce1a34b14",
"text": "Burrows’s Delta is the most established measure for stylometric difference in literary authorship attribution. Several improvements on the original Delta have been proposed. However, a recent empirical study showed that none of the proposed variants constitute a major improvement in terms of authorship attribution performance. With this paper, we try to improve our understanding of how and why these text distance measures work for authorship attribution. We evaluate the effects of standardization and vector normalization on the statistical distributions of features and the resulting text clustering quality. Furthermore, we explore supervised selection of discriminant words as a procedure for further improving authorship attribution.",
"title": ""
},
{
"docid": "a6e71e4be58c51b580fcf08e9d1a100a",
"text": "This dissertation is concerned with the processing of high velocity text using event processing means. It comprises a scientific approach for combining the area of information filtering and event processing, in order to analyse fast and voluminous streams of text. In order to be able to process text streams within event driven means, an event reference model was developed that allows for the conversion of unstructured or semi-structured text streams into discrete event types on which event processing engines can operate. Additionally, a set of essential reference processes in the domain of information filtering and text stream analysis were described using eventdriven concepts. In a second step, a reference architecture was designed that described essential architectural components required for the design of information filtering and text stream analysis systems in an event-driven manner. Further to this, a set of architectural patterns for building event driven text analysis systems was derived that support the design and implementation of such systems. Subsequently, a prototype was built using the theoretic foundations. This system was initially used to study the effect of sliding window sizes on the properties of dynamic sub-corpora. It could be shown that small sliding window based corpora are similar to larger sliding windows and thus can be used as a resource-saving alternative. Next, a study of several linguistic aspects of text streams was undertaken that showed that event stream summary statistics can provide interesting insights into the characteristics of high velocity text streams. Finally, four essential information filtering and text stream analysis components were studied, viz. filter policies, term weighting, thresholds and query expansion. These were studied using three temporal search profile types and were evaluated using standard performance measures. The goal was to study the efficiency of traditional as well as new algorithms within the given context of high velocity text stream data, in order to provide advise which methods work best. The results of this dissertation are intended to provide software architects and developers with valuable information for the design and implementation of event-driven text stream analysis systems.",
"title": ""
}
] |
scidocsrr
|
1ee5c144dc3f48d1b1cc018d86bba030
|
VinySLAM: An indoor SLAM method for low-cost platforms based on the Transferable Belief Model
|
[
{
"docid": "13d8008b246e1d3ee2a2b5b1688f1886",
"text": "TinySLAM [1] is one of the most simple SLAM methods but the original implementation [2] is based on the specific robot model and provided as the ad-hoc application. Its key feature is simplicity of implementation and configuration at cost of accuracy (as our tests shown). Some changes were made in the original algorithm in order to minimize an error of estimated trajectories. The introduced model of cell leads to an error decrease on almost all tested indoor sequences. The proposed dynamic probability estimator improves usage of coarse-grained maps when memory efficiency is more desirable than accuracy. Obtained quantitative measurements justify the changes made in tinySLAM in case the method is used in a relatively small environment.",
"title": ""
},
{
"docid": "7399a8096f56c46a20715b9f223d05bf",
"text": "Recently, Rao-Blackwellized particle filters (RBPF) have been introduced as an effective means to solve the simultaneous localization and mapping problem. This approach uses a particle filter in which each particle carries an individual map of the environment. Accordingly, a key question is how to reduce the number of particles. In this paper, we present adaptive techniques for reducing this number in a RBPF for learning grid maps. We propose an approach to compute an accurate proposal distribution, taking into account not only the movement of the robot, but also the most recent observation. This drastically decreases the uncertainty about the robot's pose in the prediction step of the filter. Furthermore, we present an approach to selectively carry out resampling operations, which seriously reduces the problem of particle depletion. Experimental results carried out with real mobile robots in large-scale indoor, as well as outdoor, environments illustrate the advantages of our methods over previous approaches",
"title": ""
}
] |
[
{
"docid": "29509d1f63d155dfa63efcf8d4102283",
"text": "The purpose of this work was to determine the effects of varying levels of dietary protein on body composition and muscle protein synthesis during energy deficit (ED). A randomized controlled trial of 39 adults assigned the subjects diets providing protein at 0.8 (recommended dietary allowance; RDA), 1.6 (2×-RDA), and 2.4 (3×-RDA) g kg(-1) d(-1) for 31 d. A 10-d weight-maintenance (WM) period was followed by a 21 d, 40% ED. Body composition and postabsorptive and postprandial muscle protein synthesis were assessed during WM (d 9-10) and ED (d 30-31). Volunteers lost (P<0.05) 3.2 ± 0.2 kg body weight during ED regardless of dietary protein. The proportion of weight loss due to reductions in fat-free mass was lower (P<0.05) and the loss of fat mass was higher (P<0.05) in those receiving 2×-RDA and 3×-RDA compared to RDA. The anabolic muscle response to a protein-rich meal during ED was not different (P>0.05) from WM for 2×-RDA and 3×-RDA, but was lower during ED than WM for those consuming RDA levels of protein (energy × protein interaction, P<0.05). To assess muscle protein metabolic responses to varied protein intakes during ED, RDA served as the study control. In summary, we determined that consuming dietary protein at levels exceeding the RDA may protect fat-free mass during short-term weight loss.",
"title": ""
},
{
"docid": "014ff12b51ce9f4399bca09e0dedabed",
"text": "The crystallographic preferred orientation (CPO) of olivine produced during dislocation creep is considered to be the primary cause of elastic anisotropy in Earth’s upper mantle and is often used to determine the direction of mantle flow. A fundamental question remains, however, as to whether the alignment of olivine crystals is uniquely produced by dislocation creep. Here we report the development of CPO in iron-free olivine (that is, forsterite) during diffusion creep; the intensity and pattern of CPO depend on temperature and the presence of melt, which control the appearance of crystallographic planes on grain boundaries. Grain boundary sliding on these crystallography-controlled boundaries accommodated by diffusion contributes to grain rotation, resulting in a CPO. We show that strong radial anisotropy is anticipated at temperatures corresponding to depths where melting initiates to depths where strongly anisotropic and low seismic velocities are detected. Conversely, weak anisotropy is anticipated at temperatures corresponding to depths where almost isotropic mantle is found. We propose diffusion creep to be the primary means of mantle flow.",
"title": ""
},
{
"docid": "dd37e97635b0ded2751d64cafcaa1aa4",
"text": "DEVICES, AND STRUCTURES By S.E. Lyshevshi, CRC Press, 2002. This book is the first of the CRC Press “Nanoand Microscience, Engineering, Technology, and Medicine Series,” of which the author of this book is also the editor. This book could be a textbook of a semester course on microelectro mechanical systems (MEMS) and nanoelectromechanical systems (NEMS). The objective is to cover the topic from basic theory to the design and development of structures of practical devices and systems. The idea of MEMS and NEMS is to utilize and further extend the technology of integrated circuits (VLSI) to nanometer structures of mechanical and biological devices for potential applications in molecular biology and medicine. MEMS and NEMS (nanotechnology) are hot topics in the future development of electronics. The interest is not limited to electrical engineers. In fact, many scientists and researchers are interested in developing MEMS and NEMS for biological and medical applications. Thus, this field has attracted researchers from many different fields. Many new books are coming out. This book seems to be the first one aimed to be a textbook for this field, but it is very hard to write a book for readers with such different backgrounds. The author of this book has emphasized computer modeling, mostly due to his research interest in this field. It would be good to provide coverage on biological and medical MEMS, for example, by reporting a few gen or DNA-related cases. Furthermore, the mathematical modeling in term of a large number of nonlinear coupled differential equations, as used in many places in the book, does not appear to have any practical value to the actual physical structures.",
"title": ""
},
{
"docid": "c1694750a148296c8b907eb6d1a86074",
"text": "A field experiment was carried out to implement a remote sensing energy balance (RSEB) algorithm for estimating the incoming solar radiation (Rsi), net radiation (Rn), sensible heat flux (H), soil heat flux (G) and latent heat flux (LE) over a drip-irrigated olive (cv. Arbequina) orchard located in the Pencahue Valley, Maule Region, Chile (35 ̋251S; 71 ̋441W; 90 m above sea level). For this study, a helicopter-based unmanned aerial vehicle (UAV) was equipped with multispectral and infrared thermal cameras to obtain simultaneously the normalized difference vegetation index (NDVI) and surface temperature (Tsurface) at very high resolution (6 cm ˆ 6 cm). Meteorological variables and surface energy balance components were measured at the time of the UAV overpass (near solar noon). The performance of the RSEB algorithm was evaluated using measurements of H and LE obtained from an eddy correlation system. In addition, estimated values of Rsi and Rn were compared with ground-truth measurements from a four-way net radiometer while those of G were compared with soil heat flux based on flux plates. Results indicated that RSEB algorithm estimated LE and H with errors of 7% and 5%, respectively. Values of the root mean squared error (RMSE) and mean absolute error (MAE) for LE were 50 and 43 W m ́2 while those for H were 56 and 46 W m ́2, respectively. Finally, the RSEB algorithm computed Rsi, Rn and G with error less than 5% and with values of RMSE and MAE less than 38 W m ́2. Results demonstrated that multispectral and thermal cameras placed on an UAV could provide an excellent tool to evaluate the intra-orchard spatial variability of Rn, G, H, LE, NDVI and Tsurface over the tree canopy and soil surface between rows.",
"title": ""
},
{
"docid": "33985b2a3a5ef35539cd72532505374b",
"text": "A 36‐year‐old female patient presented with gradually enlarging and painful bleeding of her lower lip within 20 years. The patient did not define mechanical irritation, smoking, atopic state, chronic sun exposure, or photosensitivity. She was on oral antidiabetic treatment for Type 1 diabetes mellitus for 5 years. She did not have xerophthalmia, xerostomia, or arthritis. Organomegaly, lymphadenopathy, or palpable mass or any glandular involvement such as submandibular, sublingual, lacrimal, and parotid glands were not detected. Dermatological examination revealed fine desquamation, lacy white streaks, dilated and erythematous multiple milimetric ductal openings, and mild serous discharge by palpation [Figure 1]. There was no vermilion or adjacent skin involvement. A wedge resection biopsy of the lower lip showed epidermal keratinization, granular layer, apoptotic cells lined in the basal layer, and lichenoid inflammation. Chronic lymphocytic inflammation of the minor salivary glands, periductal dense lymphocytic inflammation, and mild ductal ectasia were detected in the dermis [Figure 2]. The inflammatory infiltrate in the tissue did not contain any plasma cells staining with CD138. Periodic acid Schiff/alcian blue (PAS/AB) staining did not show any dermal mucin or thick basal membrane. Fibrosis and obliterative phlebitis within the tissue were not present.",
"title": ""
},
{
"docid": "097414fbbbf19f7b244d4726d5d27f96",
"text": "Touch is both the first sense to develop and a critical means of information acquisition and environmental manipulation. Physical touch experiences may create an ontological scaffold for the development of intrapersonal and interpersonal conceptual and metaphorical knowledge, as well as a springboard for the application of this knowledge. In six experiments, holding heavy or light clipboards, solving rough or smooth puzzles, and touching hard or soft objects nonconsciously influenced impressions and decisions formed about unrelated people and situations. Among other effects, heavy objects made job candidates appear more important, rough objects made social interactions appear more difficult, and hard objects increased rigidity in negotiations. Basic tactile sensations are thus shown to influence higher social cognitive processing in dimension-specific and metaphor-specific ways.",
"title": ""
},
{
"docid": "aed21c9b9a244c12f1463990c990c787",
"text": "This paper gives an overview of the potentials and limitations of bibliometric methods for the assessment of strengths and weaknesses in research performance, and for monitoring scientific developments. We distinguish two different methods. In the first application, research performance assessment, the bibliometric method is based on advanced analysis of publication and citation data. We show that the resulting indicators are very useful, and in fact an indispensable element next to peer review in research evaluation procedures. Indicators based on advanced bibliometric methods offer much more than ‘only numbers’. They provide insight into the position of actors at the research front in terms of influence and specializations, as well as into patterns of scientific communication and processes of knowledge dissemination. After a discussion of technical and methodological problems, we present practical examples of the use of research performance indicators. In the second application, monitoring scientific developments, bibliometric methods based on advanced mapping techniques are essential. We discuss these techniques briefly and indicate their most important potentials, particularly their role in foresight exercises. Finally, we give a first outline of how both bibliometric approaches can be combined to a broader and powerful methodology to observe scientific advancement and the role of actors.",
"title": ""
},
{
"docid": "bd24772c4f75f90fe51841aeb9632e4f",
"text": "Fifty years have passed since the publication of the first regression tree algorithm. New techniques have added capabilities that far surpass those of the early methods. Modern classification trees can partition the data with linear splits on subsets of variables and fit nearest neighbor, kernel density, and other models in the partitions. Regression trees can fit almost every kind of traditional statistical model, including least-squares, quantile, logistic, Poisson, and proportional hazards models, as well as models for longitudinal and multiresponse data. Greater availability and affordability of software (much of which is free) have played a significant role in helping the techniques gain acceptance and popularity in the broader scientific community. This article surveys the developments and briefly reviews the key ideas behind some of the major algorithms.",
"title": ""
},
{
"docid": "6a1e5ffdcac8d22cfc8f9c2fc1ca0e17",
"text": "Magnetic resonance images (MRI) play an important role in supporting and substituting clinical information in the diagnosis of multiple sclerosis (MS) disease by presenting lesion in brain MR images. In this paper, an algorithm for MS lesion segmentation from Brain MR Images has been presented. We revisit the modification of properties of fuzzy -c means algorithms and the canny edge detection. By changing and reformed fuzzy c-means clustering algorithms, and applying canny contraction principle, a relationship between MS lesions and edge detection is established. For the special case of FCM, we derive a sufficient condition and clustering parameters, allowing identification of them as (local) minima of the objective function.",
"title": ""
},
{
"docid": "24604a884715e4e65094d5051b3b574c",
"text": "We describe the use of two spike-and-slab models for modeling real-valued data, with an emphasis on their applications to object recognition. The first model, which we call spike-and-slab sparse coding (S3C), is a preexisting model for which we introduce a faster approximate inference algorithm. We introduce a deep variant of S3C, which we call the partially directed deep Boltzmann machine (PD-DBM) and extend our S3C inference algorithm for use on this model. We describe learning procedures for each. We demonstrate that our inference procedure for S3C enables scaling the model to unprecedented large problem sizes, and demonstrate that using S3C as a feature extractor results in very good object recognition performance, particularly when the number of labeled examples is low. We show that the PD-DBM generates better samples than its shallow counterpart, and that unlike DBMs or DBNs, the PD-DBM may be trained successfully without greedy layerwise training.",
"title": ""
},
{
"docid": "a9e26514ffc78c1018e00c63296b9584",
"text": "When labeled examples are limited and difficult to obtain, transfer learning employs knowledge from a source domain to improve learning accuracy in the target domain. However, the assumption made by existing approaches, that the marginal and conditional probabilities are directly related between source and target domains, has limited applicability in either the original space or its linear transformations. To solve this problem, we propose an adaptive kernel approach that maps the marginal distribution of target-domain and source-domain data into a common kernel space, and utilize a sample selection strategy to draw conditional probabilities between the two domains closer. We formally show that under the kernel-mapping space, the difference in distributions between the two domains is bounded; and the prediction error of the proposed approach can also be bounded. Experimental results demonstrate that the proposed method outperforms both traditional inductive classifiers and the state-of-the-art boosting-based transfer algorithms on most domains, including text categorization and web page ratings. In particular, it can achieve around 10% higher accuracy than other approaches for the text categorization problem. The source code and datasets are available from the authors.",
"title": ""
},
{
"docid": "e03444a976fbacb91df3a32ff0f27e6f",
"text": "In past few years, mobile wallet took spotlight as alternative of existing payment solution in many countries such as USA, South Korea, Germany and China. Although considered as one of the most convenient payment, mobile wallet only claimed 1% from total electronic payment transaction in Indonesia. The aim of this study is to identify the behavior and user acceptance factors of mobile wallet technology. Online survey was conducted among 372 respondents to test hypothesis based on UTAUT2 model. Respondents consisted of 61.29% of male and 38.71% of female with age proportion was dominated by age group of 20's of 78.76%. In addition, 50.81% of respondents never used mobile wallet before and 49.19% of respondents have ever used mobile wallet. Data obtained were confirmed using confirmatory factor analysis and analyzed using structural equation model. The study found that habit was the factor that most strongly affected individual behavioral intention to use mobile wallet in Indonesia, followed by social influence, effort expectancy and hedonic motivation. The findings of this research for management can be used as consideration for making product decision related to mobile wallet. Further study is needed, as mobile wallet is still in early stage and another factor beside UTAUT2 should be considered in the study.",
"title": ""
},
{
"docid": "a6d0c3a9ca6c2c4561b868baa998dace",
"text": "Diprosopus or duplication of the lower lip and mandible is a very rare congenital anomaly. We report this unusual case occurring in a girl who presented to our hospital at the age of 4 months. Surgery and problems related to this anomaly are discussed.",
"title": ""
},
{
"docid": "ebcf7d8a4f28527760dc6a480b684d10",
"text": "High-performance liquid chromatography (HPLC) combined with diode array (DAD) and electrospray ionization mass spectrometric (ESI-MS) detections were used to characterize anthocyanins in the berries of Vaccinium arctostaphylos L. The dark purple-black berries were collected from five Caucasian blueberry populations in northeastern Turkey. The HPLC-DAD profile consisted of 19 anthocyanin peaks, but HPLC-ESI-MS revealed fragment ion patterns of 26 anthocyanins. Delphinidin, cyanidin, petunidin, peonidin, and malvidin were all glycosylated with four different monosaccharide moieties (galactose, glucose, arabinose, and xylose) with the first two also conjugated with rhamnose. Furthermore, anthocyanidin disaccharides, tentatively identified as anthocyanidin sambubiosides, were characteristic for these berries. The mean content of the total anthocyanins was 1420 mg/100 g dry weight. The most predominant anthocyanidins were delphinidin (41%), petunidin (19%), and malvidin (19%). Glucose was the most typical (61%) sugar moiety. This study revealed that wild Caucasian blueberries contain an abundance of bioactive anthocyanins and thus are ideal for various functional food purposes.",
"title": ""
},
{
"docid": "9e5d38fa22500ff30888a3d71d938676",
"text": "While there are many Web services which help users nd things to buy, we know of none which actually try to automate the process of buying and selling. Kasbah is a virtual marketplace on the Web where users create autonomous agents to buy and sell goods on their behalf. Users specify parameters to guide and constrain an agent's overall behavior. A simple prototype has been built to test the viability of this concept.",
"title": ""
},
{
"docid": "d638bf6a0ec3354dd6ba90df0536aa72",
"text": "Selected elements of dynamical system (DS) theory approach to nonlinear time series analysis are introduced. Key role in this concept plays a method of time delay. The method enables us reconstruct phase space trajectory of DS without knowledge of its governing equations. Our variant is tested and compared with wellknown TISEAN package for Lorenz and Hénon systems. Introduction There are number of methods of nonlinear time series analysis (e.g. nonlinear prediction or noise reduction) that work in a phase space (PS) of dynamical systems. We assume that a given time series of some variable is generated by a dynamical system. A specific state of the system can be represented by a point in the phase space and time evolution of the system creates a trajectory in the phase space. From this point of view we consider our time series to be a projection of trajectory of DS to one (or more – when we have more simultaneously measured variables) coordinates of phase space. This view was enabled due to formulation of embedding theorem [1], [2] at the beginning of the 1980s. It says that it is possible to reconstruct the phase space from the time series. One of the most frequently used methods of phase space reconstruction is the method of time delay. The main task while using this method is to determine values of time delay τ and embedding dimension m. We tested individual steps of this method on simulated data generated by Lorenz and Hénon systems. We compared results computed by our own programs with outputs of program package TISEAN created by R. Hegger, H. Kantz, and T. Schreiber [3]. Method of time delay The most frequently used method of PS reconstruction is the method of time delay. If we have a time series of a scalar variable we construct a vector ( ) , ,..., 1 , N i t x i = in phase space in time ti as following: ( ) ( ) ( ) ( ) ( ) ( ) [ ], 1 ,..., 2 , , τ τ τ − + + + = m t x t x t x t x t i i i i i X where i goes from 1 to N – (m – 1)τ, τ is time delay, m is a dimension of reconstructed space (embedding dimension) and M = N – (m – 1)τ is number of points (states) in the phase space. According to embedding theorem, when this is done in a proper way, dynamics reconstructed using this formula is equivalent to the dynamics on an attractor in the origin phase space in the sense that characteristic invariants of the system are conserved. The time delay method and related aspects are described in literature, e.g. [4]. We estimated the two parameters—time delay and embedding dimension—using algorithms below. Choosing a time delay To determine a suitable time delay we used average mutual information (AMI), a certain generalization of autocorrelation function. Average mutual information between sets of measurements A and B is defined [5]:",
"title": ""
},
{
"docid": "d24331326c59911f9c1cdc5dd5f14845",
"text": "A novel topology for a soft-switching buck dc– dc converter with a coupled inductor is proposed. The soft-switching buck converter has advantages over the traditional hardswitching converters. The most significant advantage is that it offers a lower switching loss. This converter operates under a zero-current switching condition at turn on and a zero-voltage switching condition at turn off. It presents the circuit configuration with a least components for realizing soft switching. Because of soft switching, the proposed converter can attain a high efficiency under heavy load conditions. Likewise, a high efficiency is also attained under light load conditions, which is significantly different from other soft switching buck converters",
"title": ""
},
{
"docid": "901debd94cb5749a9a1f06b0fd0cb155",
"text": "• Business process reengineering-the redesign of an organization's business processes to make them more efficient. • Coordination technology-an aid to managing dependencies among the agents within a business process, and provides automated support for the most routinized component processes. * Process-driven software development environments-an automated system for integrating the work of all software related management and staff; it provides embedded support for an orderly and defined software development process. These three applications share a growing requirement to represent the processes through which work is accomplished. To the extent that automation is involved, process representation becomes a vital issue in redesigning work and allocating responsibilities between humans and computers. This requirement reflects the growing use of distributed , networked systems to link the interacting agents responsible for executing a business process. To establish process modeling as a unique area, researchers must identify conceptual boundaries that distinguish their work from model-ing in other areas of information science. Process modeling is distinguished from other types of model-ing in computer science because many of the phenomena modeled must be enacted by a human rather than a machine. At least some mod-eling, however, in the area of human-machine system integration or information systems design has this 'human-executable' attribute. Rather than focusing solely on the user's behavior at the interface or the flow and transformation of data within the system, process model-ing also focuses on interacting behaviors among agents, regardless of whether a computer is involved in the transactions. Much of the research on process modeling has been conducted on software development organizations , since the software engineering community is already accustomed to formal modeling. Software process modeling, in particular , explicitly focuses on phenomena that occur during software creation and evolution, a domain different from that usually mod-eled in human-machine integration or information systems design. Software development is a challenging focus for process modeling because of the creative problem-solving involved in requirements analysis and design, and the coordination of team interactions during the development of a complex intellectual artifact. In this article, software process modeling will be used as an example application for describing the current status of process modeling, issues for practical use, and the research questions that remain ahead. Most software organizations possess several yards of software life cycle description, enough to wrap endlessly around the walls of project rooms. Often these descriptions do not correspond to the processes actually performed during software …",
"title": ""
},
{
"docid": "475ad4a81e5cb6bcffbeb77cae320c44",
"text": "Medical image processing is a very active and fast-growing field that has evolved into an established discipline. Accurate segmentation of medical images is a fundamental step in clinical studies for diagnosis, monitoring, and treatment planning. Manual segmentation of medical images is a time consuming and a tedious task. Therefore the automated segmentation algorithms with high accuracy are of interest. There are several critical factors that determine the performance of a segmentation algorithm. Examples are: the area of application of segmentation technique, reproducibility of the method, accuracy of the results, etc. The purpose of this review is to provide an overview of current image segmentation methods. Their relative efficiency, advantages, and the problems they encounter are discussed. In order to evaluate the segmentation results, some popular benchmark measurements are presented.",
"title": ""
}
] |
scidocsrr
|
4035829baddbf5c989ae61068b8aec28
|
Review of Heart Disease Prediction System Using Data Mining and Hybrid Intelligent Techniques
|
[
{
"docid": "63a58b3b6eb46cdd92b9c241b1670926",
"text": "The Healthcare industry is generally "information rich", but unfortunately not all the data are mined which is required for discovering hidden patterns & effective decision making. Advanced data mining techniques are used to discover knowledge in database and for medical research, particularly in Heart disease prediction. This paper has analysed prediction systems for Heart disease using more number of input attributes. The system uses medical terms such as sex, blood pressure, cholesterol like 13 attributes to predict the likelihood of patient getting a Heart disease. Until now, 13 attributes are used for prediction. This research paper added two more attributes i. e. obesity and smoking. The data mining classification techniques, namely Decision Trees, Naive Bayes, and Neural Networks are analyzed on Heart disease database. The performance of these techniques is compared, based on accuracy. As per our results accuracy of Neural Networks, Decision Trees, and Naive Bayes are 100%, 99. 62%, and 90. 74% respectively. Our analysis shows that out of these three classification models Neural Networks predicts Heart disease with highest accuracy.",
"title": ""
},
{
"docid": "8d9a02974ad85aa508dc0f7a85a669f1",
"text": "The successful application of data mining in highly visible fields like e-business, marketing and retail has led to its application in other industries and sectors. Among these sectors just discovering is healthcare. The healthcare environment is still „information rich‟ but „knowledge poor‟. There is a wealth of data available within the healthcare systems. However, there is a lack of effective analysis tools to discover hidden relationships and trends in data. This research paper intends to provide a survey of current techniques of knowledge discovery in databases using data mining techniques that are in use in today‟s medical research particularly in Heart Disease Prediction. Number of experiment has been conducted to compare the performance of predictive data mining technique on the same dataset and the outcome reveals that Decision Tree outperforms and some time Bayesian classification is having similar accuracy as of decision tree but other predictive methods like KNN, Neural Networks, Classification based on clustering are not performing well. The second conclusion is that the accuracy of the Decision Tree and Bayesian Classification further improves after applying genetic algorithm to reduce the actual data size to get the optimal subset of attribute sufficient for heart disease prediction.",
"title": ""
}
] |
[
{
"docid": "77320edf2d8da853b873c71e26802c6e",
"text": "Content Delivery Network (CDN) services largely affect the delivery quality perceived by users. While those services were initially offered by independent entities, some large ISP now develop their own CDN activities to control costs and delivery quality. But this new activity is also a new source of revenues for those vertically integrated ISP-CDNs, which can sell those services to content providers. In this paper, we investigate the impact of having an ISP and a vertically-integrated CDN, on the main actors of the ecosystem (users, competing ISPs). Our approach is based on an economic model of revenues and costs, and a multilevel game-theoretic formulation of the interactions among actors. Our model incorporates the possibility for the vertically-integrated ISP to partially offer CDN services to competitors in order to optimize the trade-off between CDN revenue (if fully offered) and competitive advantage on subscriptions at the ISP level (if not offered to competitors). Our results highlight two counterintuitive phenomena: an ISP may prefer an independent CDN over controlling (integrating) a CDN, and from the user point of view vertical integration is preferable to an independent CDN or a no-CDN configuration. Hence, a regulator may want to elicit such CDN-ISP vertical integrations rather than prevent them.",
"title": ""
},
{
"docid": "e61b6ae5d763fb135093cdfa035b82bf",
"text": "Computer-mediated communication is driving fundamental changes in the nature of written language. We investigate these changes by statistical analysis of a dataset comprising 107 million Twitter messages (authored by 2.7 million unique user accounts). Using a latent vector autoregressive model to aggregate across thousands of words, we identify high-level patterns in diffusion of linguistic change over the United States. Our model is robust to unpredictable changes in Twitter's sampling rate, and provides a probabilistic characterization of the relationship of macro-scale linguistic influence to a set of demographic and geographic predictors. The results of this analysis offer support for prior arguments that focus on geographical proximity and population size. However, demographic similarity - especially with regard to race - plays an even more central role, as cities with similar racial demographics are far more likely to share linguistic influence. Rather than moving towards a single unified \"netspeak\" dialect, language evolution in computer-mediated communication reproduces existing fault lines in spoken American English.",
"title": ""
},
{
"docid": "19917b734907c41e97c24120fb5be495",
"text": "Providing various wireless connectivities for vehicles enables the communication between vehicles and their internal and external environments. Such a connected vehicle solution is expected to be the next frontier for automotive revolution and the key to the evolution to next generation intelligent transportation systems (ITSs). Moreover, connected vehicles are also the building blocks of emerging Internet of Vehicles (IoV). Extensive research activities and numerous industrial initiatives have paved the way for the coming era of connected vehicles. In this paper, we focus on wireless technologies and potential challenges to provide vehicle-to-x connectivity. In particular, we discuss the challenges and review the state-of-the-art wireless solutions for vehicle-to-sensor, vehicle-to-vehicle, vehicle-to-Internet, and vehicle-to-road infrastructure connectivities. We also identify future research issues for building connected vehicles.",
"title": ""
},
{
"docid": "5ab7e9ccf859c06a0a2056c78121ff4b",
"text": "Building Information Modelling (BIM) is an expansive knowledge domain within the Design, Construction and Operation (DCO) industry",
"title": ""
},
{
"docid": "d4ffc874c00d91812283909c0024d6b3",
"text": "This article provides a comprehensive and comparative overview of question answering technology. It presents the question answering task from an information retrieval perspective and emphasises the importance of retrieval models, i.e., representations of queries and information documents, and retrieval functions which are used for estimating the relevance between a query and an answer candidate. The survey suggests a general question answering architecture that steadily increases the complexity of the representation level of questions and information objects. On the one hand, natural language queries are reduced to keyword-based searches, on the other hand, knowledge bases are queried with structured or logical queries obtained from the natural language questions, and answers are obtained through reasoning. We discuss different levels of processing yielding bagof-words-based and more complex representations integrating part-of-speech tags, classification of the expected answer type, semantic roles, discourse analysis, translation into a SQL-like language and logical representations. 2011 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "0d6a28cc55d52365986382f43c28c42c",
"text": "Predictive analytics embraces an extensive range of techniques including statistical modeling, machine learning, and data mining and is applied in business intelligence, public health, disaster management and response, and many other fields. To date, visualization has been broadly used to support tasks in the predictive analytics pipeline. Primary uses have been in data cleaning, exploratory analysis, and diagnostics. For example, scatterplots and bar charts are used to illustrate class distributions and responses. More recently, extensive visual analytics systems for feature selection, incremental learning, and various prediction tasks have been proposed to support the growing use of complex models, agent-specific optimization, and comprehensive model comparison and result exploration. Such work is being driven by advances in interactive machine learning and the desire of end-users to understand and engage with the modeling process. In this state-of-the-art report, we catalogue recent advances in the visualization community for supporting predictive analytics. First, we define the scope of predictive analytics discussed in this article and describe how visual analytics can support predictive analytics tasks in a predictive visual analytics (PVA) pipeline. We then survey the literature and categorize the research with respect to the proposed PVA pipeline. Systems and techniques are evaluated in terms of their supported interactions, and interactions specific to predictive analytics are discussed. We end this report with a discussion of challenges and opportunities for future research in predictive visual analytics.",
"title": ""
},
{
"docid": "1a17f5bebb430ade117f449a2837b10b",
"text": "Traditional Scene Understanding problems such as Object Detection and Semantic Segmentation have made breakthroughs in recent years due to the adoption of deep learning. However, the former task is not able to localise objects at a pixel level, and the latter task has no notion of different instances of objects of the same class. We focus on the task of Instance Segmentation which recognises and localises objects down to a pixel level. Our model is based on a deep neural network trained for semantic segmentation. This network incorporates a Conditional Random Field with end-to-end trainable higher order potentials based on object detector outputs. This allows us to reason about instances from an initial, category-level semantic segmentation. Our simple method effectively leverages the great progress recently made in semantic segmentation and object detection. The accurate instance-level segmentations that our network produces is reflected by the considerable improvements obtained over previous work at high APr IoU thresholds.",
"title": ""
},
{
"docid": "e626b53b5d9f29e81ede1b977d3163c5",
"text": "Darshan is a lightweight I/O characterization tool used to gather and summarize salient I/O workload statistics from HPC applications. Darshan was designed to minimize any possible perturbations of an application’s performance, leading it to be enabled by default on a number of production HPC systems. For each file accessed by a given application, Darshan records the count and types of I/O operations performed, histograms of access sizes, cumulative timers on the amount of time spent doing I/O, and other statistical data. This type of data has proved invaluable in understanding and improving the I/O performance of HPC applications. Darshan 3.0.0 is the new modularized version of the traditional Darshan library and file format, allowing users to easily add more in-depth I/O characterization data to Darshan logs. In this work we perform an empirical evaluation of Darshan 3.0.0 to ensure that it continues to meet performance expectations for broad deployment. In particular, we evaluate the imposed overhead on instrumented I/O operations, time taken to shut down and generate corresponding Darshan log files, and resultant log file sizes for different workloads. These performance results are compared to results of Darshan 2.3.0 on the Edison XC30 system at NERSC to determine whether the new version is lightweight enough to run full-time on production HPC systems. Our evaluation shows that Darshan has limited impact on application I/O performance and can fully generate a corresponding log file for most application workloads in under two seconds.",
"title": ""
},
{
"docid": "6f88ebc92bf650f3bb38d170979ffed2",
"text": "In this paper, we present an automatic question generation system that can generate gap-fill questions for content in a document. Gap-fill questions are fill-in-the-blank questions with multiple choices (one correct answer and three distractors) provided. The system finds the informative sentences from the document and generates gap-fill questions from them by first blanking keys from the sentences and then determining the distractors for these keys. Syntactic and lexical features are used in this process without relying on any external resource apart from the information in the document. We evaluated our system on two chapters of a standard biology textbook and presented the results.",
"title": ""
},
{
"docid": "799573bf08fb91b1ac644c979741e7d2",
"text": "This short paper reports the method and the evaluation results of Casio and Shinshu University joint team for the ISBI Challenge 2017 – Skin Lesion Analysis Towards Melanoma Detection – Part 3: Lesion Classification hosted by ISIC. Our online validation score was 0.958 with melanoma classifier AUC 0.924 and seborrheic keratosis classifier AUC 0.993.",
"title": ""
},
{
"docid": "ac2a0f7165dd1bee4d5cbee2ac1604f8",
"text": "Curcumin (CUR), a natural polyphenol isolated from tumeric ( Curcuma longa ), has been documented to possess antioxidant and anticancer activities. Unfortunately, the compound has poor aqueous solubility, which results in poor bioavailability following high doses by oral administration. To improve the solubility of CUR, we developed a novel curcumin nanoparticle system (CURN) and investigated its physicochemical properties as well as its enhanced dissolution mechanism. Our results indicated that CURN improved the physicochemical properties of CUR, including a reduction in particle size and the formation of an amorphous state with hydrogen bonding, both of which increased the drug release of the compound. Moreover, in vitro studies indicated that CURN significantly enhanced the antioxidant and antihepatoma activities of CUR (P < 0.05). Consequently, we suggest that CURN can be used to reduce the dosage of CUR and improve its bioavailability and merits further investigation for therapeutic applications.",
"title": ""
},
{
"docid": "690888d679f93891d278bded0c1238fd",
"text": "The challenge of predicting future values of a time series covers a variety of disciplines. The fundamental problem of selecting the order and identifying the time varying parameters of an autoregressive moving average model (ARMA) concerns many important fields of interest such as linear prediction, system identification and spectral analysis. Recent research activities in forecasting with artificial neural networks (ANNs) suggest that ANNs can be a promising alternative to the traditional ARMA structure. These linear models and ANNs are often compared with mixed conclusions in terms of the superiority in forecasting performance. This study was designed: (a) to investigate a hybrid methodology that combines ANN and ARMA models; (b) to resolve one of the most important problems in time series using ARMA structure and Box–Jenkins methodology: the identification of the model. In this paper, we present a new procedure to predict time series using paradigms such as: fuzzy systems, neural networks and evolutionary algorithms. Our goal is to obtain an expert system based on paradigms of artificial intelligence, so that the linear model can be identified automatically, without the need of human expert participation. The obtained linear model will be combined with ANN, making up an hybrid system that could outperform the forecasting result. r 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "bf3436274102ee8291e4bc506a3efe4f",
"text": "The performance of a vehicle control strategy, in terms of fuel economy improvement and emission reduction, is strongly influenced by driving conditions and drivers' driving styles. The term of ‘driving conditions’ here means the traffic conditions and road type, which is usually indicated by standard driving cycles, say FTP 75 and NEDC; the term of ‘driving styles’ here relates to the drivers' behavior, especially how drivers apply pressure on acceleration and brake pedal. To realize optimal fuel economy, it is ideal to obtain the information of future driving conditions and drivers' driving styles. This paper summarizes the methods and parameters that have been utilized to attain this end as well as the results. Based on this study, methods and parameters can be better selected for further improvement of driving conditions prediction and driving style recognition based hybrid electric vehicle control strategy.",
"title": ""
},
{
"docid": "4cd9c7d6018920c5275c63e7bce663b9",
"text": "Bullying of lesbian, gay, bisexual, and transgender (LGBT) youth is prevalent in the United States, and represents LGBT stigma when tied to sexual orientation and/or gender identity or expression. LGBT youth commonly report verbal, relational, and physical bullying, and damage to property. Bullying undermines the well-being of LGBT youth, with implications for risky health behaviors, poor mental health, and poor physical health that may last into adulthood. Pediatricians can play a vital role in preventing and identifying bullying, providing counseling to youth and their parents, and advocating for programs and policies to address LGBT bullying.",
"title": ""
},
{
"docid": "b889b863e0344361be7d8eeafca872c5",
"text": "This paper presents a singular-value-based semi-fragile watermarking scheme for image content authentication. The proposed scheme generates secure watermark by performing a logical operation on content-dependent watermark generated by a singular-value-based sequence and contentindependent watermark generated by a private-key-based sequence. It next employs the adaptive quantization method to embed secure watermark in approximation subband of each 4 4 block to generate the watermarked image. The watermark extraction process then extracts watermark using the parity of quantization results from the probe image. The authentication process starts with regenerating secure watermark following the same process. It then constructs error maps to compute five authentication measures and performs a three-level process to authenticate image content and localize tampered areas. Extensive experimental results show that the proposed scheme outperforms five peer schemes and its two variant systems and is capable of identifying intentional tampering, incidental modification, and localizing tampered regions under mild to severe content-preserving modifications. 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "486d77b1e951e5c87454490c15d91ae5",
"text": "BACKGROUND\nThe influence of menopausal status on depressive symptoms is unclear in diverse ethnic groups. This study examined the longitudinal relationship between changes in menopausal status and the risk of clinically relevant depressive symptoms and whether the relationship differed according to initial depressive symptom level.\n\n\nMETHODS\n3302 African American, Chinese, Hispanic, Japanese, and White women, aged 42-52 years at entry into the Study of Women's Health Across the Nation (SWAN), a community-based, multisite longitudinal observational study, were evaluated annually from 1995 through 2002. Random effects multiple logistic regression analyses were used to determine the relationship between menopausal status and prevalence of low and high depressive symptom scores (CES-D <16 or > or =16) over 5 years.\n\n\nRESULTS\nAt baseline, 23% of the sample had elevated CES-D scores. A woman was more likely to report CES-D > or =16 when she was early peri-, late peri-, postmenopausal or currently/formerly using hormone therapy (HT), relative to when she was premenopausal (OR range 1.30 to 1.71). Effects were somewhat stronger for women with low CES-D scores at baseline. Health and psychosocial factors increased the odds of having a high CES-D and in some cases, were more important than menopausal status.\n\n\nLIMITATIONS\nWe used a measure of current depressive symptoms rather than a diagnosis of clinical depression. Thus, we can only make conclusions about symptoms current at annual assessments.\n\n\nCONCLUSION\nMost midlife women do not experience high depressive symptoms. Those that do are more likely to experience high depressive symptom levels when perimenopausal or postmenopausal than when premenopausal, independent of factors such as difficulty paying for basics, negative attitudes, poor perceived health, and stressful events.",
"title": ""
},
{
"docid": "df4ca9ed339707e2135ed1eebb564fa1",
"text": "Wireless-communication technology can be used to improve road safety and to provide Internet access inside vehicles. This paper proposes a cross-layer protocol called coordinated external peer communication (CEPEC) for Internet-access services and peer communications for vehicular networks. We assume that IEEE 802.16 base stations (BS) are installed along highways and that the same air interface is equipped in vehicles. Certain vehicles locating outside of the limited coverage of their nearest BSs can still get access to the Internet via a multihop route to their BSs. For Internet-access services, the objective of CEPEC is to increase the end-to-end throughput while providing a fairness guarantee in bandwidth usage among road segments. To achieve this goal, the road is logically partitioned into segments of equal length. A relaying head is selected in each segment that performs both local-packet collecting and aggregated packets relaying. The simulation results have shown that the proposed CEPEC protocol provides higher throughput with guaranteed fairness in multihop data delivery in vehicular networks when compared with the purely IEEE 802.16-based protocol.",
"title": ""
},
{
"docid": "bcb6ef3082d50038b456af4b942e75eb",
"text": "Vertebral angioma is a common bone tumor. We report a case of L1 vertebral angioma revealed by type A3.2 traumatic pathological fracture of the same vertebra. Management comprised emergency percutaneous osteosynthesis and, after stabilization of the multiple trauma, arterial embolization and percutaneous kyphoplasty.",
"title": ""
},
{
"docid": "d3c8903fed280246ea7cb473ee87c0e7",
"text": "Reaction time has a been a favorite subject of experimental psychologists since the middle of the nineteenth century. However, most studies ask questions about the organization of the brain, so the authors spend a lot of time trying to determine if the results conform to some mathematical model of brain activity. This makes these papers hard to understand for the beginning student. In this review, I have ignored these brain organization questions and summarized the major literature conclusions that are applicable to undergraduate laboratories using my Reaction Time software. I hope this review helps you write a good report on your reaction time experiment. I also apologize to reaction time researchers for omissions and oversimplifications.",
"title": ""
},
{
"docid": "59494d2a19ea2167f4095807ded28d67",
"text": "This paper describes extensions to the Kintinuous [1] algorithm for spatially extended KinectFusion, incorporating the following additions: (i) the integration of multiple 6DOF camera odometry estimation methods for robust tracking; (ii) a novel GPU-based implementation of an existing dense RGB-D visual odometry algorithm; (iii) advanced fused realtime surface coloring. These extensions are validated with extensive experimental results, both quantitative and qualitative, demonstrating the ability to build dense fully colored models of spatially extended environments for robotics and virtual reality applications while remaining robust against scenes with challenging sets of geometric and visual features.",
"title": ""
}
] |
scidocsrr
|
2103f167a2bb6b6912aa9bbcfdefb781
|
Video Segmentation with Background Motion Models
|
[
{
"docid": "35cbd0c888d230c4778d3bb14ab796e1",
"text": "Occlusion relations inform the partition of the image domain into “objects” but are difficult to determine from a single image or short-baseline video. We show how long-term occlusion relations can be robustly inferred from video, and used within a convex optimization framework to segment the image domain into regions. We highlight the challenges in determining these occluder/occluded relations and ensuring regions remain temporally consistent, propose strategies to overcome them, and introduce an efficient numerical scheme to perform the partition directly on the pixel grid, without the need for superpixelization or other preprocessing steps.",
"title": ""
},
{
"docid": "522345eb9b2e53f05bb9d961c85fea23",
"text": "In this work, we propose a novel approach to video segmentation that operates in bilateral space. We design a new energy on the vertices of a regularly sampled spatiotemporal bilateral grid, which can be solved efficiently using a standard graph cut label assignment. Using a bilateral formulation, the energy that we minimize implicitly approximates long-range, spatio-temporal connections between pixels while still containing only a small number of variables and only local graph edges. We compare to a number of recent methods, and show that our approach achieves state-of-the-art results on multiple benchmarks in a fraction of the runtime. Furthermore, our method scales linearly with image size, allowing for interactive feedback on real-world high resolution video.",
"title": ""
}
] |
[
{
"docid": "a00ac4cefbb432ffcc6535dd8fd56880",
"text": "Mobile activity recognition focuses on inferring current user activities by leveraging sensory data available on today's sensor rich mobile phones. Supervised learning with static models has been applied pervasively for mobile activity recognition. In this paper, we propose a novel phone-based dynamic recognition framework with evolving data streams for activity recognition. The novel framework incorporates incremental and active learning for real-time recognition and adaptation in streaming settings. While stream evolves, we refine, enhance and personalise the learning model in order to accommodate the natural drift in a given data stream. Extensive experimental results using real activity recognition data have evidenced that the novel dynamic approach shows improved performance of recognising activities especially across different users. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a144509c91a0cc8f50f0bb7e3d8dbdd6",
"text": "The prefrontal cortex is necessary for directing thought and planning action. Working memory, the active, transient maintenance of information in mind for subsequent monitoring and manipulation, lies at the core of many simple, as well as high-level, cognitive functions. Working memory has been shown to be compromised in a number of neurological and psychiatric conditions and may contribute to the behavioral and cognitive deficits associated with these disorders. It has been theorized that working memory depends upon reverberating circuits within the prefrontal cortex and other cortical areas. However, recent work indicates that intracellular signals and protein dephosphorylation are critical for working memory. The present article will review recent research into the involvement of the modulatory neurotransmitters and their receptors in working memory. The intracellular signaling pathways activated by these receptors and evidence that indicates a role for G(q)-initiated PI-PLC and calcium-dependent protein phosphatase calcineurin activity in working memory will be discussed. Additionally, the negative influence of calcium- and cAMP-dependent protein kinase (i.e., calcium/calmodulin-dependent protein kinase II (CaMKII), calcium/diacylglycerol-activated protein kinase C (PKC), and cAMP-dependent protein kinase A (PKA)) activities on working memory will be reviewed. The implications of these experimental findings on the observed inverted-U relationship between D(1) receptor stimulation and working memory, as well as age-associated working memory dysfunction, will be presented. Finally, we will discuss considerations for the development of clinical treatments for working memory disorders.",
"title": ""
},
{
"docid": "9497731525a996844714d5bdbca6ae03",
"text": "Recently, machine learning is widely used in applications and cloud services. And as the emerging field of machine learning, deep learning shows excellent ability in solving complex learning problems. To give users better experience, high performance implementations of deep learning applications seem very important. As a common means to accelerate algorithms, FPGA has high performance, low power consumption, small size and other characteristics. So we use FPGA to design a deep learning accelerator, the accelerator focuses on the implementation of the prediction process, data access optimization and pipeline structure. Compared with Core 2 CPU 2.3GHz, our accelerator can achieve promising result.",
"title": ""
},
{
"docid": "e121891a063a2a05a83c369a54b0ecea",
"text": "The number of vulnerabilities in open source libraries is increasing rapidly. However, the majority of them do not go through public disclosure. These unidentified vulnerabilities put developers' products at risk of being hacked since they are increasingly relying on open source libraries to assemble and build software quickly. To find unidentified vulnerabilities in open source libraries and secure modern software development, we describe an efficient automatic vulnerability identification system geared towards tracking large-scale projects in real time using natural language processing and machine learning techniques. Built upon the latent information underlying commit messages and bug reports in open source projects using GitHub, JIRA, and Bugzilla, our K-fold stacking classifier achieves promising results on vulnerability identification. Compared to the state of the art SVM-based classifier in prior work on vulnerability identification in commit messages, we improve precision by 54.55% while maintaining the same recall rate. For bug reports, we achieve a much higher precision of 0.70 and recall rate of 0.71 compared to existing work. Moreover, observations from running the trained model at SourceClear in production for over 3 months has shown 0.83 precision, 0.74 recall rate, and detected 349 hidden vulnerabilities, proving the effectiveness and generality of the proposed approach.",
"title": ""
},
{
"docid": "6544c01bbd76427c9078d7a2a7dad8d5",
"text": "Music is capable of inducing emotional arousal. While previous studies used brief musical excerpts to induce one specific emotion, the current study aimed to identify the physiological correlates of continuous changes in subjective emotional states while listening to a complete music piece. A total of 19 participants listened to the first movement of Ludwig van Beethoven’s 5th symphony (duration: ~7.4 min), during which a continuous 76-channel EEG was recorded. In a second session, the subjects evaluated their emotional arousal during the listening. A fast fourier transform was performed and covariance maps of spectral power were computed in association with the subjective arousal ratings. Subjective arousal ratings had good inter-individual correlations. Covariance maps showed a right-frontal suppression of lower alpha-band activity during high arousal. The results indicate that music is a powerful arousal-modulating stimulus. The temporal dynamics of the piece are well suited for sequential analysis, and could be necessary in helping unfold the full emotional power of music.",
"title": ""
},
{
"docid": "523677ed6d482ab6551f6d87b8ad761e",
"text": "To enable information integration, schema matching is a critical step for discovering semantic correspondences of attributes across heterogeneous sources. While complex matchings are common, because of their far more complex search space, most existing techniques focus on simple 1:1 matchings. To tackle this challenge, this article takes a conceptually novel approach by viewing schema matching as correlation mining, for our task of matching Web query interfaces to integrate the myriad databases on the Internet. On this “deep Web ” query interfaces generally form complex matchings between attribute groups (e.g., {author} corresponds to {first name, last name} in the Books domain). We observe that the co-occurrences patterns across query interfaces often reveal such complex semantic relationships: grouping attributes (e.g., {first name, last name}) tend to be co-present in query interfaces and thus positively correlated. In contrast, synonym attributes are negatively correlated because they rarely co-occur. This insight enables us to discover complex matchings by a correlation mining approach. In particular, we develop the DCM framework, which consists of data preprocessing, dual mining of positive and negative correlations, and finally matching construction. We evaluate the DCM framework on manually extracted interfaces and the results show good accuracy for discovering complex matchings. Further, to automate the entire matching process, we incorporate automatic techniques for interface extraction. Executing the DCM framework on automatically extracted interfaces, we find that the inevitable errors in automatic interface extraction may significantly affect the matching result. To make the DCM framework robust against such “noisy” schemas, we integrate it with a novel “ensemble” approach, which creates an ensemble of DCM matchers, by randomizing the schema data into many trials and aggregating their ranked results by taking majority voting. As a principled basis, we provide analytic justification of the robustness of the ensemble approach. Empirically, our experiments show that the “ensemblization” indeed significantly boosts the matching accuracy, over automatically extracted and thus noisy schema data. By employing the DCM framework with the ensemble approach, we thus complete an automatic process of matchings Web query interfaces.",
"title": ""
},
{
"docid": "f64e6f77891168c980e48ced53022184",
"text": "Text classification methods for tasks like factoid question answering typically use manually defined string matching rules or bag of words representations. These methods are ineffective when question text contains very few individual words (e.g., named entities) that are indicative of the answer. We introduce a recursive neural network (rnn) model that can reason over such input by modeling textual compositionality. We apply our model, qanta, to a dataset of questions from a trivia competition called quiz bowl. Unlike previous rnn models, qanta learns word and phrase-level representations that combine across sentences to reason about entities. The model outperforms multiple baselines and, when combined with information retrieval methods, rivals the best human players.",
"title": ""
},
{
"docid": "722b2d50bf854e002a0311f7511e433c",
"text": "The bat algorithm (BA) is a nature-inspired algorithm, which has recently been applied in many applications. BA can deal with both continuous optimization and discrete optimization problems. The literature has expanded significantly in the past few years, this paper provides a timely review of the latest developments. We also highlight some topics for further research.",
"title": ""
},
{
"docid": "cefcf529227d2d29780b09bb87b2c66c",
"text": "This paper presents a simple method o f trajectory generation of robot manipulators based on an optimal control problem formulation. It was found recently that the jerk, the third derivative of position, of the desired trajectory, adversely affects the efficiency of the control algorithms and therefore should be minimized. Assuming joint position, velocity and acceleration t o be constrained a cost criterion containing jerk is considered. Initially. the simple environment without obstacles and constrained by the physical l imitat ions o f the jo in t angles only i s examined. For practical reasons, the free execution t ime has been used t o handle the velocity and acceleration constraints instead of the complete bounded state variable formulation. The problem o f minimizing the jerk along an arbitrary Cartesian trajectory i s formulated and given analytical solution, making this method useful for real world environments containing obstacles.",
"title": ""
},
{
"docid": "de4ee63cd9bf19dff2c63e7bece833e1",
"text": "Big Data contains massive information, which are generating from heterogeneous, autonomous sources with distributed and anonymous platforms. Since, it raises extreme challenge to organizations to store and process these data. Conventional pathway of store and process is happening as collection of manual steps and it is consuming various resources. An automated real-time and online analytical process is the most cognitive solution. Therefore it needs state of the art approach to overcome barriers and concerns currently facing by the Big Data industry. In this paper we proposed a novel architecture to automate data analytics process using Nested Automatic Service Composition (NASC) and CRoss Industry Standard Platform for Data Mining (CRISPDM) as main based technologies of the solution. NASC is well defined scalable technology to automate multidisciplined problems domains. Since CRISP-DM also a well-known data science process which can be used as innovative accumulator of multi-dimensional data sets. CRISP-DM will be mapped with Big Data analytical process and NASC will automate the CRISP-DM process in an intelligent and innovative way.",
"title": ""
},
{
"docid": "24ade252fcc6bd5404484cb9ad5987a3",
"text": "The cornerstone of the IBM System/360 philosophy is that the architecture of a computer is basically independent of its physical implementation. Therefore, in System/360, different physical implementations have been made of the single architectural definition which is illustrated in Figure 1.",
"title": ""
},
{
"docid": "3a92798e81a03e5ef7fb18110e5da043",
"text": "BACKGROUND\nRespiratory failure is a serious complication that can adversely affect the hospital course and survival of multiply injured patients. Some studies have suggested that delayed surgical stabilization of spine fractures may increase the incidence of respiratory complications. However, the authors of these studies analyzed small sets of patients and did not assess the independent effects of multiple risk factors.\n\n\nMETHODS\nA retrospective cohort study was conducted at a regional level-I trauma center to identify risk factors for respiratory failure in patients with surgically treated thoracic and lumbar spine fractures. Demographic, diagnostic, and procedural variables were identified. The incidence of respiratory failure was determined in an adult respiratory distress syndrome registry maintained concurrently at the same institution. Univariate and multivariate analyses were used to determine independent risk factors for respiratory failure. An algorithm was formulated to predict respiratory failure.\n\n\nRESULTS\nRespiratory failure developed in 140 of the 1032 patients in the study cohort. Patients with respiratory failure were older; had a higher mean Injury Severity Score (ISS) and Charlson Comorbidity Index Score; had greater incidences of pneumothorax, pulmonary contusion, and thoracic level injury; had a lower mean Glasgow Coma Score (GCS); were more likely to have had a posterior surgical approach; and had a longer mean time from admission to surgical stabilization than the patients without respiratory failure (p < 0.05). Multivariate analysis identified five independent risk factors for respiratory failure: an age of more than thirty-five years, an ISS of > 25 points, a GCS of < or = 12 points, blunt chest injury, and surgical stabilization performed more than two days after admission. An algorithm was created to determine, on the basis of the number of preoperative predictors present, the relative risk of respiratory failure when surgery was delayed for more than two days.\n\n\nCONCLUSIONS\nIndependent risk factors for respiratory failure were identified in an analysis of a large cohort of patients who had undergone operative stabilization of thoracic and lumbar spine fractures. Early operative stabilization of these fractures, the only risk factor that can be controlled by the physician, may decrease the risk of respiratory failure in multiply injured patients.",
"title": ""
},
{
"docid": "c6005a99e6a60a4ee5f958521dcad4d3",
"text": "We document initial experiments with Canid, a freestanding, power-autonomous quadrupedal robot equipped with a parallel actuated elastic spine. Research into robotic bounding and galloping platforms holds scientific and engineering interest because it can both probe biological hypotheses regarding bounding and galloping mammals and also provide the engineering community with a new class of agile, efficient and rapidly-locomoting legged robots. We detail the design features of Canid that promote our goals of agile operation in a relatively cheap, conventionally prototyped, commercial off-the-shelf actuated platform. We introduce new measurement methodology aimed at capturing our robot’s “body energy” during real time operation as a means of quantifying its potential for agile behavior. Finally, we present joint motor, inertial and motion capture data taken from Canid’s initial leaps into highly energetic regimes exhibiting large accelerations that illustrate the use of this measure and suggest its future potential as a platform for developing efficient, stable, hence useful bounding gaits. For more information: Kod*Lab Disciplines Electrical and Computer Engineering | Engineering | Systems Engineering Comments BibTeX entry @article{canid_spie_2013, author = {Pusey, Jason L. and Duperret, Jeffrey M. and Haynes, G. Clark and Knopf, Ryan and Koditschek , Daniel E.}, title = {Free-Standing Leaping Experiments with a PowerAutonomous, Elastic-Spined Quadruped}, pages = {87410W-87410W-15}, year = {2013}, doi = {10.1117/ 12.2016073} } This work is supported by the National Science Foundation Graduate Research Fellowship under Grant Number DGE-0822, and by the Army Research Laboratory under Cooperative Agreement Number W911NF-10–2−0016. Copyright 2013 Society of Photo-Optical Instrumentation Engineers. Postprint version. This paper was (will be) published in Proceedings of the SPIE Defense, Security, and Sensing Conference, Unmanned Systems Technology XV (8741), and is made available as an electronic reprint with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/ese_papers/655 Free-Standing Leaping Experiments with a Power-Autonomous, Elastic-Spined Quadruped Jason L. Pusey a , Jeffrey M. Duperret b , G. Clark Haynes c , Ryan Knopf b , and Daniel E. Koditschek b a U.S. Army Research Laboratory, Aberdeen Proving Ground, MD, b University of Pennsylvania, Philadelphia, PA, c National Robotics Engineering Center, Carnegie Mellon University, Pittsburgh, PA",
"title": ""
},
{
"docid": "cbc2a96515b9f3917e40515a3829ee8d",
"text": "We present the accurate modeling and analysis, followed by experimental validation, of a 1024-element (64-by-16) antenna array. This fixed-beam array radiates linear polarization in Ka-band (19.7–20.2 GHz). It acts as a first step in the design and modeling of future antenna arrays for satcom-on-the-move applications. Accurate prediction of the behavior of such a large array is a challenging task since full-wave simulation of the entire structure cannot be considered. By taking advantage of existing formalisms on periodic arrays and by using appropriate methods to efficiently exploit such formulations, it is possible to accurately define the performances of all building blocks, from the feeding circuits to the radiating elements, over a frequency range. Such a detailed design also allows an accurate physical analysis. It has been successfully used to guarantee the measured performances. This paper is intended to detail different steps to antenna designers.",
"title": ""
},
{
"docid": "27bc95568467efccb3e6cc185e905e42",
"text": "Major studios and independent production firms (Indies) often have to select or “greenlight” a portfolio of scripts to turn into movies. Despite the huge financial risk at stake, there is currently no risk management tool they can use to aid their decisions, even though such a tool is sorely needed. In this paper, we developed a forecasting and risk management tool, based on movies scripts, to aid movie studios and production firms in their green-lighting decisions. The methodology developed can also assist outside investors if they have access to the scripts. Building upon and extending the previous literature, we extracted three levels of textual information (genre/content, bag-of-words, and semantics) from movie scripts. We then incorporate these textual variables as predictors, together with the contemplated production budget, into a BART-QL (Bayesian Additive Regression Tree for Quasi-Linear) model to obtain the posterior predictive distributions, rather than point forecasts, of the box office revenues for the corresponding movies. We demonstrate how the predictive distributions of box office revenues can potentially be used to help movie producers intelligently select their movie production portfolios based on their risk preferences, and we describe an illustrative analysis performed for an independent production firm.",
"title": ""
},
{
"docid": "fcd9a80d35a24c7222392c11d3376c72",
"text": "A dual-band coplanar waveguide (CPW)-fed hybrid antenna consisting of a 5.4 GHz high-band CPW-fed inductive slot antenna and a 2.4 GHz low-band bifurcated F-shaped monopole antenna is proposed and investigated experimentally. This antenna possesses an appealing characteristic that the CPW-fed inductive slot antenna reinforces and thus improves the radiation efficiency of the bifurcated monopole antenna. Moreover, due to field orthogonality, one band resonant frequency and return loss bandwidth of the proposed hybrid antenna allows almost independent optimization without noticeably affecting those of the other band.",
"title": ""
},
{
"docid": "5fcda05ef200cd326ecb9c2412cf50b3",
"text": "OBJECTIVE\nPalpable lymph nodes are common due to the reactive hyperplasia of lymphatic tissue mainly connected with local inflammatory process. Differential diagnosis of persistent nodular change on the neck is different in children, due to higher incidence of congenital abnormalities and infectious diseases and relative rarity of malignancies in that age group. The aim of our study was to analyse the most common causes of childhood cervical lymphadenopathy and determine of management guidelines on the basis of clinical examination and ultrasonographic evaluation.\n\n\nMATERIAL AND METHODS\nThe research covered 87 children with cervical lymphadenopathy. Age, gender and accompanying diseases of the patients were assessed. All the patients were diagnosed radiologically on the basis of ultrasonographic evaluation.\n\n\nRESULTS\nReactive inflammatory changes of bacterial origin were observed in 50 children (57.5%). Fever was the most common general symptom accompanying lymphadenopathy and was observed in 21 cases (24.1%). The ultrasonographic evaluation revealed oval-shaped lymph nodes with the domination of long axis in 78 patients (89.66%). The proper width of hilus and their proper vascularization were observed in 75 children (86.2%). Some additional clinical and laboratory tests were needed in the patients with abnormal sonographic image.\n\n\nCONCLUSIONS\nUltrasonographic imaging is extremely helpful in diagnostics, differentiation and following the treatment of childhood lymphadenopathy. Failure of regression after 4-6 weeks might be an indication for a diagnostic biopsy.",
"title": ""
},
{
"docid": "da7beedfca8e099bb560120fc5047399",
"text": "OBJECTIVE\nThis study aims to assess the relationship of late-night cell phone use with sleep duration and quality in a sample of Iranian adolescents.\n\n\nMETHODS\nThe study population consisted of 2400 adolescents, aged 12-18 years, living in Isfahan, Iran. Age, body mass index, sleep duration, cell phone use after 9p.m., and physical activity were documented. For sleep assessment, the Pittsburgh Sleep Quality Index questionnaire was used.\n\n\nRESULTS\nThe participation rate was 90.4% (n=2257 adolescents). The mean (SD) age of participants was 15.44 (1.55) years; 1270 participants reported to use cell phone after 9p.m. Overall, 56.1% of girls and 38.9% of boys reported poor quality sleep, respectively. Wake-up time was 8:17 a.m. (2.33), among late-night cell phone users and 8:03a.m. (2.11) among non-users. Most (52%) late-night cell phone users had poor sleep quality. Sedentary participants had higher sleep latency than their peers. Adjusted binary and multinomial logistic regression models showed that late-night cell users were 1.39 times more likely to have a poor sleep quality than non-users (p-value<0.001).\n\n\nCONCLUSION\nLate-night cell phone use by adolescents was associated with poorer sleep quality. Participants who were physically active had better sleep quality and quantity. As part of healthy lifestyle recommendations, avoidance of late-night cell phone use should be encouraged in adolescents.",
"title": ""
},
{
"docid": "809aed520d0023535fec644e81ddbb53",
"text": "This paper presents an efficient image denoising scheme by using principal component analysis (PCA) with local pixel grouping (LPG). For a better preservation of image local structures, a pixel and its nearest neighbors are modeled as a vector variable, whose training samples are selected from the local window by using block matching based LPG. Such an LPG procedure guarantees that only the sample blocks with similar contents are used in the local statistics calculation for PCA transform estimation, so that the image local features can be well preserved after coefficient shrinkage in the PCA domain to remove the noise. The LPG-PCA denoising procedure is iterated one more time to further improve the denoising performance, and the noise level is adaptively adjusted in the second stage. Experimental results on benchmark test images demonstrate that the LPG-PCA method achieves very competitive denoising performance, especially in image fine structure preservation, compared with state-of-the-art denoising algorithms. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "406fbdfff4f7abb505c0e238e08decca",
"text": "A computationally efficient method for detecting a chorus section in popular and rock music is presented. The method utilizes a distance matrix representation that is obtained by summing two separate distance matrices calculated using the mel-frequency cepstral coefficient and pitch chroma features. The benefit of computing two separate distance matrices is that different enhancement operations can be applied on each. An enhancement operation is found beneficial only for the chroma distance matrix. This is followed by detection of the off-diagonal segments of small distance from the distance matrix. From the detected segments, an initial chorus section is selected using a scoring mechanism utilizing several heuristics, and subjected to further processing. This further processing involves using image processing filters in a neighborhood of the distance matrix surrounding the initial chorus section. The final position and length of the chorus is selected based on the filtering results. On a database of 206 popular & rock music pieces an average F-measure of 86% is obtained. It takes about ten seconds to process a song with an average duration of three to four minutes on a Windows XP computer with a 2.8 GHz Intel Xeon processor.",
"title": ""
}
] |
scidocsrr
|
71617a2559a2ff876637f8c7d9f17e48
|
Capsicum: Practical Capabilities for UNIX
|
[
{
"docid": "dbcae5be70fef927ccac30876b0a8bcf",
"text": "Many operating system services require special privilege to execute their tasks. A programming error in a privileged service opens the door to system compromise in the form of unauthorized acquisition of privileges. In the worst case, a remote attacker may obtain superuser privileges. In this paper, we discuss the methodology and design of privilege separation, a generic approach that lets parts of an application run with different levels of privilege. Programming errors occurring in the unprivileged parts can no longer be abused to gain unauthorized privileges. Privilege separation is orthogonal to capability systems or application confinement and enhances the security of such systems even further. Privilege separation is especially useful for system services that authenticate users. These services execute privileged operations depending on internal state not known to an application confinement mechanism. As a concrete example, the concept of privilege separation has been implemented in OpenSSH. However, privilege separation is equally useful for other authenticating services. We illustrate how separation of privileges reduces the amount of OpenSSH code that is executed with special privilege. Privilege separation prevents known security vulnerabilities in prior OpenSSH versions including some that were unknown at the time of its implementation.",
"title": ""
}
] |
[
{
"docid": "3bc9e621a0cfa7b8791ae3fb94eff738",
"text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.",
"title": ""
},
{
"docid": "c59e72c374b3134e347674dccb86b0a4",
"text": "Lane detection and tracking and departure warning systems are important components of Intelligent Transportation Systems. They have particularly attracted great interest from industry and academia. Many architectures and commercial systems have been proposed in the literature. In this paper, we discuss the design of such systems regarding the following stages: pre-processing, detection, and tracking. For each stage, a short description of its working principle as well as their advantages and shortcomings are introduced. Our paper may possibly help in designing new systems that overcome and improve the shortcomings of current architectures.",
"title": ""
},
{
"docid": "09d4f38c87d6cc0e2cb6b1a7caad10f8",
"text": "Semidefinite programs (SDPs) can be solved in polynomial time by interior point methods, but scalability can be an issue. To address this shortcoming, over a decade ago, Burer and Monteiro proposed to solve SDPs with few equality constraints via rank-restricted, non-convex surrogates. Remarkably, for some applications, local optimization methods seem to converge to global optima of these non-convex surrogates reliably. Although some theory supports this empirical success, a complete explanation of it remains an open question. In this paper, we consider a class of SDPs which includes applications such as max-cut, community detection in the stochastic block model, robust PCA, phase retrieval and synchronization of rotations. We show that the low-rank Burer–Monteiro formulation of SDPs in that class almost never has any spurious local optima. This paper was corrected on April 9, 2018. Theorems 2 and 4 had the assumption that M (1) is a manifold. From this assumption it was stated that TYM = {Ẏ ∈ Rn×p : A(Ẏ Y >+ Y Ẏ >) = 0}, which is not true in general. To ensure this identity, the theorems now make the stronger assumption that gradients of the constraintsA(Y Y >) = b are linearly independent for all Y inM. All examples treated in the paper satisfy this assumption. Appendix D gives details.",
"title": ""
},
{
"docid": "9f1ec1e90fc40335705dea7580379093",
"text": "Segmentation of blood vessels in retinal images is an important part in retinal image analysis for diagnosis and treatment of eye diseases for large screening systems. In this paper, we addressed the problem of background and noise extraction from retinal images. Blood vessels usually have central light reflex and poor local contrast, hence the results yield by blood vessel segmentation algorithms are not satisfactory. We used different preprocessing steps which includes central light reflex removal, background homogenization and vessel enhancement to make retinal image noise-free for post-processing. We used mean and Gaussian filtering along with Top-Hat transformation for noise extraction. The preprocessing steps were applied on 40 retinal images of DRIVE database available publically. Results show the darker retinal structures like blood vessels, fovea, and possible presence of microaneurysms or hemorrhages, get enhanced as compared to original retinal image and the brighter structures like optic disc and possible presence of exudates were get removed . The presented technique will definitely improve automatic fundus images analysis also be very useful to eye specialists in their visual examination of retina.",
"title": ""
},
{
"docid": "9889cb9ae08cd177e6fa55c3ae7b8831",
"text": "Design and developmental procedure of strip-line based 1.5 MW, 30-96 MHz, ultra-wideband high power 3 dB hybrid coupler has been presented and its applicability in ion cyclotron resonance heating (ICRH) in tokamak is discussed. For the high power handling capability, spacing between conductors and ground need to very high. Hence other structural parameters like strip-width, strip thickness coupling gap, and junction also become large which can be gone upto optimum limit where various constrains like fabrication tolerance, discontinuities, and excitation of higher TE and TM modes become prominent and significantly deteriorates the desired parameters of the coupled lines system. In designed hybrid coupler, two 8.34 dB coupled lines are connected in tandem to get desired coupling of 3 dB and air is used as dielectric. The spacing between ground and conductors are taken as 0.164 m for 1.5 MW power handling capability. To have the desired spacing, each of 8.34 dB segments are designed with inner dimension of 3.6 × 1.0 × 40 cm where constraints have been significantly realized, compensated, and applied in designing of 1.5 MW hybrid coupler and presented in paper.",
"title": ""
},
{
"docid": "77754266da79a87b99e51b0088888550",
"text": "The paper proposed a novel automatic target recognition (ATR) system for classification of three types of ground vehicles in the moving and stationary target acquisition and recognition (MSTAR) public release database. First MSTAR image chips are represented as fine and raw feature vectors, where raw features compensate for the target pose estimation error that corrupts fine image features. Then, the chips are classified by using the adaptive boosting (AdaBoost) algorithm with the radial basis function (RBF) network as the base learner. Since the RBF network is a binary classifier, the multiclass problem was decomposed into a set of binary ones through the error-correcting output codes (ECOC) method, specifying a dictionary of code words for the set of three possible classes. AdaBoost combines the classification results of the RBF network for each binary problem into a code word, which is then \"decoded\" as one of the code words (i.e., ground-vehicle classes) in the specified dictionary. Along with classification, within the AdaBoost framework, we also conduct efficient fusion of the fine and raw image-feature vectors. The results of large-scale experiments demonstrate that our ATR scheme outperforms the state-of-the-art systems reported in the literature",
"title": ""
},
{
"docid": "399f3c7320e8a63da78c9701afbdf842",
"text": "Land use (LU) maps are an important source of information in academia and for policy-makers describing the usage of land parcels. A large amount of effort and monetary resources are spent on mapping LU features over time and at local, regional, and global scales. Remote sensing images and signal processing techniques, as well as land surveying are the prime sources to map LU features. However, both data gathering approaches are financially expensive and time consuming. But recently, Web 2.0 technologies and the wide dissemination of GPSenabled devices boosted public participation in collaborative mapping projects (CMPs). In this regard, the OpenStreetMap (OSM) project has been one of the most successful representatives, providing LU features. The main objective of this paper is to comparatively assess the accuracy of the contributed OSM-LU features in four German metropolitan areas versus the pan-European GMESUA dataset as a reference. Kappa index analysis along with per-class user’s and producers’ accuracies are used for accuracy assessment. The empirical findings suggest OSM as an alternative complementary source for extracting LU information whereas exceeding 50 % of the selected cities are mapped by mappers. Moreover, the results identify which land types preserve high/moderate/low accuracy across cities for urban LU mapping. The findings strength the potential of collaboratively collected LU J. Jokar Arsanjani (&) A. Zipf A. Schauss GIScience Research Group, Institute of Geography, Heidelberg University, 69120 Heidelberg, Germany e-mail: [email protected] A. Zipf e-mail: [email protected] A. Schauss e-mail: [email protected] P. Mooney Department of Computer Science, Maynooth University, Maynooth, Co. Kildare, Ireland e-mail: [email protected] © Springer International Publishing Switzerland 2015 J. Jokar Arsanjani et al. (eds.), OpenStreetMap in GIScience, Lecture Notes in Geoinformation and Cartography, DOI 10.1007/978-3-319-14280-7_3 37 features for providing temporal LU maps as well as updating/enriching existing inventories. Furthermore, such a collaborative approach can be used for collecting a global coverage of LU information specifically in countries in which temporal and monetary efforts could be minimized.",
"title": ""
},
{
"docid": "8af61009253af61dd6d4daf0ad4be30c",
"text": "Forensic anthropologists often rely on the state of decomposition to estimate the postmortem interval (PMI) in a human remains case. The state of decomposition can provide much information about the PMI, especially when decomposition is treated as a semi-continuous variable and used in conjunction with accumulated-degree-days (ADD). This preliminary study demonstrates a supplemental method of determining the PMI based on scoring decomposition using a point-based system and taking into account temperatures in which the remains were exposed. This project was designed to examine the ways that forensic anthropologists could improve their PMI estimates based on decomposition by using a more quantitative approach. A total of 68 human remains cases with a known date of death were scored for decomposition and a regression equation was calculated to predict ADD from decomposition score. ADD accounts for approximately 80% of the variation in decomposition. This study indicates that decomposition is best modeled as dependent on accumulated temperature, not just time.",
"title": ""
},
{
"docid": "a6d0c3a9ca6c2c4561b868baa998dace",
"text": "Diprosopus or duplication of the lower lip and mandible is a very rare congenital anomaly. We report this unusual case occurring in a girl who presented to our hospital at the age of 4 months. Surgery and problems related to this anomaly are discussed.",
"title": ""
},
{
"docid": "18c507d6624f153cb1b7beaf503b0d54",
"text": "The critical period hypothesis for language acquisition (CP) proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. The CP hypothesis was originally proposed for spoken language but recent research has shown that it applies equally to sign language. This paper summarizes a series of experiments designed to investigate whether and how the CP affects the outcome of sign language acquisition. The results show that the CP has robust effects on the development of sign language comprehension. Effects are found at all levels of linguistic structure (phonology, morphology and syntax, the lexicon and semantics) and are greater for first as compared to second language acquisition. In addition, CP effects have been found on all measures of language comprehension examined to date, namely, working memory, narrative comprehension, sentence memory and interpretation, and on-line grammatical processing. The nature of these effects with respect to a model of language comprehension is discussed.",
"title": ""
},
{
"docid": "d308842b1684f4c7c6c499376c6b5d02",
"text": "A picture is worth one thousand words, but what words should be used to describe the sentiment and emotions conveyed in the increasingly popular social multimedia? We demonstrate a novel system which combines sound structures from psychology and the folksonomy extracted from social multimedia to develop a large visual sentiment ontology consisting of 1,200 concepts and associated classifiers called SentiBank. Each concept, defined as an Adjective Noun Pair (ANP), is made of an adjective strongly indicating emotions and a noun corresponding to objects or scenes that have a reasonable prospect of automatic detection. We believe such large-scale visual classifiers offer a powerful mid-level semantic representation enabling high-level sentiment analysis of social multimedia. We demonstrate novel applications made possible by SentiBank including live sentiment prediction of social media and visualization of visual content in a rich intuitive semantic space.",
"title": ""
},
{
"docid": "f5d769be1305755fe0753d1e22cbf5c9",
"text": "The number of malware is increasing rapidly and a lot of malware use stealth techniques such as encryption to evade pattern matching detection by anti-virus software. To resolve the problem, behavior based detection method which focuses on malicious behaviors of malware have been researched. Although they can detect unknown and encrypted malware, they suffer a serious problem of false positives against benign programs. For example, creating files and executing them are common behaviors performed by malware, however, they are also likely performed by benign programs thus it causes false positives. In this paper, we propose a malware detection method based on evaluation of suspicious process behaviors on Windows OS. To avoid false positives, our proposal focuses on not only malware specific behaviors but also normal behavior that malware would usually not do. Moreover, we implement a prototype of our proposal to effectively analyze behaviors of programs. Our evaluation experiments using our malware and benign program datasets show that our malware detection rate is about 60% and it does not cause any false positives. Furthermore, we compare our proposal with completely behavior-based anti-virus software. Our results show that our proposal puts few burdens on users and reduces false positives.",
"title": ""
},
{
"docid": "56667d286f69f8429be951ccf5d61c24",
"text": "As the Internet of Things (IoT) is emerging as an attractive paradigm, a typical IoT architecture that U2IoT (Unit IoT and Ubiquitous IoT) model has been presented for the future IoT. Based on the U2IoT model, this paper proposes a cyber-physical-social based security architecture (IPM) to deal with Information, Physical, and Management security perspectives, and presents how the architectural abstractions support U2IoT model. In particular, 1) an information security model is established to describe the mapping relations among U2IoT, security layer, and security requirement, in which social layer and additional intelligence and compatibility properties are infused into IPM; 2) physical security referring to the external context and inherent infrastructure are inspired by artificial immune algorithms; 3) recommended security strategies are suggested for social management control. The proposed IPM combining the cyber world, physical world and human social provides constructive proposal towards the future IoT security and privacy protection.",
"title": ""
},
{
"docid": "6f0faf1a90d9f9b19fb2e122a26a0f77",
"text": "Social media shatters the barrier to communicate anytime anywhere for people of all walks of life. The publicly available, virtually free information in social media poses a new challenge to consumers who have to discern whether a piece of information published in social media is reliable. For example, it can be difficult to understand the motivations behind a statement passed from one user to another, without knowing the person who originated the message. Additionally, false information can be propagated through social media, resulting in embarrassment or irreversible damages. Provenance data associated with a social media statement can help dispel rumors, clarify opinions, and confirm facts. However, provenance data about social media statements is not readily available to users today. Currently, providing this data to users requires changing the social media infrastructure or offering subscription services. Taking advantage of social media features, research in this nascent field spearheads the search for a way to provide provenance data to social media users, thus leveraging social media itself by mining it for the provenance data. Searching for provenance data reveals an interesting problem space requiring the development and application of new metrics in order to provide meaningful provenance data to social media users. This lecture reviews the current research on information provenance, explores exciting research opportunities to address pressing needs, and shows how data mining can enable a social media user to make informed judgements about statements published in social media.",
"title": ""
},
{
"docid": "d63a760289c7ecb903ad26db7b0b838d",
"text": "A new gain linearized varactor bank suitable for wideband voltage-controlled oscillators (VCOs) is presented in this paper. The VCO tuning gain linearized techniques, namely the gain variation compensation and linear tuning range extension techniques, are used in the proposed varactor bank to achieve a further reduced VCO tuning gain with low variation. The phase noise from Amplitude Modulation to Phase Modulation up conversion is considerably improved thanks to the reduced VCO tuning gain. Fabricated in a 0.18-μm CMOS technology, a 3-bits VCO prototype employing the proposed varactor bank achieves <;5% gain variation at the output frequency from 4.1 to 5 GHz, and exhibits maximum power consumption of 7.2 mW at its peak frequency, 5 GHz.",
"title": ""
},
{
"docid": "558ff9db3b8f32d8d91f8eb80b61b597",
"text": "Our paper focuses on providing information about plant diseases and prevention methods. Plants have become an important source of energy, and are a fundamental piece of the puzzle to solve the problem of global warming. There are many types of diseases which are present in plants. Diseases weaken trees and shrubs by interrupting chemical change, the method by that plants produce energy that sustains growth and defense systems and influences survival. This paper presents an improved method for plant disease detection using an adaptive approach. This approach helps to increase the accuracy of the disease level, it provides various prevention method (type and amount of pesticides to be used), the level of destruction and helps to check whether the disease spreads or not.",
"title": ""
},
{
"docid": "29199ac45d4aa8035fd03e675406c2cb",
"text": "This work presents an autonomous mobile robot in order to cover an unknown terrain “randomly”, namely entirely, unpredictably and evenly. This aim is very important, especially in military missions, such as the surveillance of terrains, the terrain exploration for explosives and the patrolling for intrusion in military facilities. The “heart” of the proposed robot is a chaotic motion controller, which is based on a chaotic true random bit generator. This generator has been implemented with a microcontroller, which converts the produced chaotic bit sequence, to the robot's motion. Experimental results confirm that this approach, with an appropriate sensor for obstacle avoidance, can obtain very satisfactory results in regard to the fast scanning of the robot’s workspace with unpredictable way. Key-Words: Autonomous mobile robot, terrain coverage, microcontroller, random bit generator, nonlinear system, chaos, Logistic map.",
"title": ""
},
{
"docid": "9f53016723d5064e3790cd316399e082",
"text": "We investigated the processing effort during visual search and counting tasks using a pupil dilation measure. Search difficulty was manipulated by varying the number of distractors as well as the heterogeneity of the distractors. More difficult visual search resulted in more pupil dilation than did less difficult search. These results confirm a link between effort and increased pupil dilation. The pupil dilated more during the counting task than during target-absent search, even though the displays were identical, and the two tasks were matched for reaction time. The moment-to-moment dilation pattern during search suggests little effort in the early stages, but increasingly more effort towards response, whereas the counting task involved an increased initial effort, which was sustained throughout the trial. These patterns can be interpreted in terms of the differential memory load for item locations in each task. In an additional experiment, increasing the spatial memory requirements of the search evoked a corresponding increase in pupil dilation. These results support the view that search tasks involve some, but limited, memory for item locations, and the effort associated with this memory load increases during the trials. In contrast, counting involves a heavy locational memory component from the start.",
"title": ""
},
{
"docid": "d732cee77a19d6ab71dd5cc2828333a1",
"text": "Biclustering algorithms, which aim to provide an effective and efficient way to analyze gene expression data by finding a group of genes with trend-preserving expression patterns under certain conditions, have been widely developed since Morgan et al. pioneered a work about partitioning a data matrix into submatrices with approximately constant values. However, the identification of general trend-preserving biclusters which are the most meaningful substructures hidden in gene expression data remains a highly challenging problem. We found an elementary method by which biologically meaningful trend-preserving biclusters can be readily identified from noisy and complex large data. The basic idea is to apply the longest common subsequence (LCS) framework to selected pairs of rows in an index matrix derived from an input data matrix to locate a seed for each bicluster to be identified. We tested it on synthetic and real datasets and compared its performance with currently competitive biclustering tools. We found that the new algorithm, named UniBic, outperformed all previous biclustering algorithms in terms of commonly used evaluation scenarios except for BicSPAM on narrow biclusters. The latter was somewhat better at finding narrow biclusters, the task for which it was specifically designed.",
"title": ""
},
{
"docid": "922cc239f2511801da980620aa87ee94",
"text": "Alloying is an effective way to engineer the band-gap structure of two-dimensional transition-metal dichalcogenide materials. Molybdenum and tungsten ditelluride alloyed with sulfur or selenium layers (MX2xTe2(1-x), M = Mo, W and X = S, Se) have a large band-gap tunability from metallic to semiconducting due to the 2H-to-1T' phase transition as controlled by the alloy concentrations, whereas the alloy atom distribution in these two phases remains elusive. Here, combining atomic resolution Z-contrast scanning transmission electron microscopy imaging and density functional theory (DFT), we discovered that anisotropic ordering occurs in the 1T' phase, in sharp contrast to the isotropic alloy behavior in the 2H phase under similar alloy concentration. The anisotropic ordering is presumably due to the anisotropic bonding in the 1T' phase, as further elaborated by DFT calculations. Our results reveal the atomic anisotropic alloyed behavior in 1T' phase layered alloys regardless of their alloy concentration, shining light on fine-tuning their physical properties via engineering the alloyed atomic structure.",
"title": ""
}
] |
scidocsrr
|
dec66cf205e2edd5d7c64b066f02b62b
|
42 Variability Bugs in the Linux Kernel: a Qualitative Analysis
|
[
{
"docid": "2fc7b4f4763d094462f13688b473d370",
"text": "Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, model checking, and theorem proving, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. The emerging field of product-line analyses is both broad and diverse, so it is difficult for researchers and practitioners to understand their similarities and differences. We propose a classification of product-line analyses to enable systematic research and application. Based on our insights with classifying and comparing a corpus of 123 research articles, we develop a research agenda to guide future research on product-line analyses.",
"title": ""
}
] |
[
{
"docid": "7d44a9227848baaf54b9bfb736727551",
"text": "Introduction: The causal relation between tongue thrust swallowing or habit and development of anterior open bite continues to be made in clinical orthodontics yet studies suggest a lack of evidence to support a cause and effect. Treatment continues to be directed towards closing the anterior open bite frequently with surgical intervention to reposition the maxilla and mandible. This case report illustrates a highly successful non-surgical orthodontic treatment without extractions.",
"title": ""
},
{
"docid": "cabdfcf94607adef9b07799aab463d64",
"text": "Monitoring the health of the elderly living independently in their own homes is a key issue in building sustainable healthcare models which support a country's ageing population. Existing approaches have typically proposed remotely monitoring the behaviour of a household's occupants through the use of additional sensors. However the costs and privacy concerns of such sensors have significantly limited their potential for widespread adoption. In contrast, in this paper we propose an approach which detects Activities of Daily Living, which we use as a proxy for the health of the household residents. Our approach detects appliance usage from existing smart meter data, from which the unique daily routines of the household occupants are learned automatically via a log Gaussian Cox process. We evaluate our approach using two real-world data sets, and show it is able to detect over 80% of kettle uses while generating less than 10% false positives. Furthermore, our approach allows earlier interventions in households with a consistent routine and fewer false alarms in the remaining households, relative to a fixed-time intervention benchmark.",
"title": ""
},
{
"docid": "6a85677755a82b147cb0874ae8299458",
"text": "Data mining involves the process of recovering related, significant and credential information from a large collection of aggregated data. A major area of current research in data mining is the field of clinical investigations that involve disease diagnosis, prognosis and drug therapy. The objective of this paper is to identify an efficient classifier for prognostic breast cancer data. This research work involves designing a data mining framework that incorporates the task of learning patterns and rules that will facilitate the formulation of decisions in new cases. The machine learning techniques employed to train the proposed system are based on feature relevance analysis and classification algorithms. Wisconsin Prognostic Breast Cancer (WPBC) data from the UCI machine learning repository is utilized by means of data mining techniques to completely train the system on 198 individual cases, each comprising of 33 predictor values. This paper highlights the performance of feature reduction and classification algorithms on the training dataset. We evaluate the number of attributes for split in the Random tree algorithm and the confidence level and minimum size of the leaves in the C4.5 algorithm to produce 100 percent classification accuracy. Our results demonstrate that Random Tree and Quinlan’s C4.5 classification algorithm produce 100 percent accuracy in the training and test phase of classification with proper evaluation of algorithmic parameters.",
"title": ""
},
{
"docid": "7fb9cb7cb777d7f245b2444cd2cd4f9d",
"text": "Several recent studies have introduced lightweight versions of Java: reduced languages in which complex features like threads and reflection are dropped to enable rigorous arguments about key properties such as type safety. We carry this process a step further, omitting almost all features of the full language (including interfaces and even assignment) to obtain a small calculus, Featherweight Java, for which rigorous proofs are not only possible but easy. Featherweight Java bears a similar relation to Java as the lambda-calculus does to languages such as ML and Haskell. It offers a similar computational \"feel,\" providing classes, methods, fields, inheritance, and dynamic typecasts with a semantics closely following Java's. A proof of type safety for Featherweight Java thus illustrates many of the interesting features of a safety proof for the full language, while remaining pleasingly compact. The minimal syntax, typing rules, and operational semantics of Featherweight Java make it a handy tool for studying the consequences of extensions and variations. As an illustration of its utility in this regard, we extend Featherweight Java with generic classes in the style of GJ (Bracha, Odersky, Stoutamire, and Wadler) and give a detailed proof of type safety. The extended system formalizes for the first time some of the key features of GJ.",
"title": ""
},
{
"docid": "0c7636279e14e75ce44e01f3cbd90de6",
"text": "Neural abstractive summarization has been increasingly studied, where the prior work mainly focused on summarizing single-speaker documents (news, scientific publications, etc). In dialogues, there are diverse interactive patterns between speakers, which are usually defined as dialogue acts. The interactive signals may provide informative cues for better summarizing dialogues. This paper proposes to explicitly leverage dialogue acts in a neural summarization model, where a sentence-gated mechanism is designed for modeling the relationships between dialogue acts and the summary. The experiments show that our proposed model significantly improves the abstractive summarization performance compared to the state-of-the-art baselines on the AMI meeting corpus, demonstrating the usefulness of the interactive signal provided by dialogue acts.1",
"title": ""
},
{
"docid": "2cc7e23666cdd2cd1ce13c7536269955",
"text": "Based on requirements of modern vehicle, invehicle Controller Area Network (CAN) architecture has been implemented. In order to reduce point to point wiring harness in vehicle automation, CAN is suggested as a means for data communication within the vehicle environment. The benefits of CAN bus based network over traditional point to point schemes will offer increased flexibility and expandability for future technology insertions. This paper describes the ARM7 based design and implementation of CAN Bus prototype for vehicle automation. It focus on hardware and software design of intelligent node. Hardware interface circuit mainly consists of MCP2515 stand alone CAN-Controller with SPI interface, LPC2148 microcontroller based on 32-bit ARM7 TDMI-S CPU and MCP2551 high speed CAN Transceiver. MCP2551 CAN Transceiver implements ISO-11898 standard physical layer requirements. The software design for CAN bus network are mainly the design of CAN bus data communication between nodes, and data processing for analog signals. The design of software communication module includes system initialization and CAN controller initialization unit, message sending unit, message receiving unit and the interrupt service unit. Keywords—Vehicle Automation, Controller Area Network (CAN), Electronic Control Unit (ECU), CANopen, LIN, SAE J1939.",
"title": ""
},
{
"docid": "93c819e7fa80de9e059cc564badec5fa",
"text": "The ARRAU corpus is an anaphorically annotated corpus of English providing rich linguistic information about anaphora resolution. The most distinctive feature of the corpus is the annotation of a wide range of anaphoric relations, including bridging references and discourse deixis in addition to identity (coreference). Other distinctive features include treating all NPs as markables, including nonreferring NPs; and the annotation of a variety of morphosyntactic and semantic mention and entity attributes, including the genericity status of the entities referred to by markables. The corpus however has not been extensively used for anaphora resolution research so far. In this paper, we discuss three datasets extracted from the ARRAU corpus to support the three subtasks of the CRAC 2018 Shared Task– identity anaphora resolution over ARRAU-style markables, bridging references resolution, and discourse deixis; the evaluation scripts assessing system performance on those datasets; and preliminary results on these three tasks that may serve as baseline for subsequent research in these phenomena.",
"title": ""
},
{
"docid": "5bde44a162fa6259ece485b4319b56a4",
"text": "3D reconstruction from single view images is an ill-posed problem. Inferring the hidden regions from self-occluded images is both challenging and ambiguous. We propose a two-pronged approach to address these issues. To better incorporate the data prior and generate meaningful reconstructions, we propose 3D-LMNet, a latent embedding matching approach for 3D reconstruction. We first train a 3D point cloud auto-encoder and then learn a mapping from the 2D image to the corresponding learnt embedding. To tackle the issue of uncertainty in the reconstruction, we predict multiple reconstructions that are consistent with the input view. This is achieved by learning a probablistic latent space with a novel view-specific ‘diversity loss’. Thorough quantitative and qualitative analysis is performed to highlight the significance of the proposed approach. We outperform state-of-the-art approaches on the task of single-view 3D reconstruction on both real and synthetic datasets while generating multiple plausible reconstructions, demonstrating the generalizability and utility of our approach.",
"title": ""
},
{
"docid": "22d878a735d649f5932be6cd0b3979c9",
"text": "This study investigates the potential to introduce basic programming concepts to middle school children within the context of a classroom writing-workshop. In this paper we describe how students drafted, revised, and published their own digital stories using the introductory programming language Scratch and in the process learned fundamental CS concepts as well as the wider connection between programming and writing as interrelated processes of composition.",
"title": ""
},
{
"docid": "8a79e13744c0e68ec00dca8f7f1c1b61",
"text": "As signature continues to play a crucial part in personal identification for number of applications including financial transaction, an efficient signature authentication system becomes more and more important. Various researches in the field of signature authentication has been dynamically pursued for many years and its extent is still being explored. Signature verification is the process which is carried out to determine whether a given signature is genuine or forged. It can be distinguished into two types such as the Online and the Offline. In this paper we presented the Offline signature verification system and extracted some new local and geometric features like QuadSurface feature, Area ratio, Distance ratio etc. For this we have taken some genuine signatures from 5 different persons and extracted the features from all of the samples after proper preprocessing steps. The training phase uses Gaussian Mixture Model (GMM) technique to obtain a reference model for each signature sample of a particular user. By computing Euclidian distance between reference signature and all the training sets of signatures, acceptance range is defined. If the Euclidian distance of a query signature is within the acceptance range then it is detected as an authenticated signature else, a forged signature.",
"title": ""
},
{
"docid": "e278b7b7cc79bd34f7c9abd7053f2ec2",
"text": "The Quadrant Based Method (QUADBAM) for Global Motion Estimation introduced here exploits local motion fields. QUADBAM produces similar results to other global motion estimation methods with a lower algorithmic complexity. Two applications of QUADBAM in the treatment of broadcast cricket highlights are presented. An automated highlight detector for broadcast cricket, based solely on QUADBAM generated global motion features, provides acceptable recall rates. Mosaic visualization of individual cricket highlights, created using QUADBAM generated global motion features, are suitable for highlight summaries.",
"title": ""
},
{
"docid": "1ccc1b904fa58b1e31f4f3f4e2d76707",
"text": "When children and adolescents are the target population in dietary surveys many different respondent and observer considerations surface. The cognitive abilities required to self-report food intake include an adequately developed concept of time, a good memory and attention span, and a knowledge of the names of foods. From the age of 8 years there is a rapid increase in the ability of children to self-report food intake. However, while cognitive abilities should be fully developed by adolescence, issues of motivation and body image may hinder willingness to report. Ten validation studies of energy intake data have demonstrated that mis-reporting, usually in the direction of under-reporting, is likely. Patterns of under-reporting vary with age, and are influenced by weight status and the dietary survey method used. Furthermore, evidence for the existence of subject-specific responding in dietary assessment challenges the assumption that repeated measurements of dietary intake will eventually obtain valid data. Unfortunately, the ability to detect mis-reporters, by comparison with presumed energy requirements, is limited unless detailed activity information is available to allow the energy intake of each subject to be evaluated individually. In addition, high variability in nutrient intakes implies that, if intakes are valid, prolonged dietary recording will be required to rank children correctly for distribution analysis. Future research should focus on refining dietary survey methods to make them more sensitive to different ages and cognitive abilities. The development of improved techniques for identification of mis-reporters and investigation of the issue of differential reporting of foods should also be given priority.",
"title": ""
},
{
"docid": "6e01d0d9b403f8bae201baa68e04fece",
"text": "OBJECTIVE\nTo apply a mathematical model to determine the relative effectiveness of various tip-plasty maneuvers while the lateral crura are in cephalic position compared with orthotopic position.\n\n\nMETHODS\nA Matlab (MathWorks, Natick, Massachusetts) computer program, called the Tip-Plasty Simulator, was developed to model the medial and lateral crura of the tripod concept in order to estimate the change in projection, rotation, and nasal length yielded by changes in crural length. The following rhinoplasty techniques were modeled in the software program: columellar strut graft/tongue-in-groove, lateral crural steal, lateral crural overlay, medial/intermediate crural overlay, hinge release with alar strut graft, and lateral crural repositioning.\n\n\nRESULTS\nUsing the Tip-Plasty Simulator, the directionality of the change in projection, rotation, and nasal length produced by the various tip-plasty maneuvers, as shown by our mathematical model, is largely the same as that expected and observed clinically. Notably, cephalically positioned lateral crura affected the results of the rhinoplasty maneuvers studied.\n\n\nCONCLUSIONS\nBy demonstrating a difference in the magnitude of change resulting from various rhinoplasty maneuvers, the results of this study enhance the ability of the rhinoplasty surgeon to predict the effects of various tip-plasty maneuvers, given the variable range in alar cartilage orientation that he or she is likely to encounter.",
"title": ""
},
{
"docid": "e79db51ac85ceafba66dddd5c038fbdf",
"text": "Machine learning based anti-phishing techniques are based on various features extracted from different sources. These features differentiate a phishing website from a legitimate one. Features are taken from various sources like URL, page content, search engine, digital certificate, website traffic, etc, of a website to detect it as a phishing or non-phishing. The websites are declared as phishing sites if the heuristic design of the websites matches with the predefined rules. The accuracy of the anti-phishing solution depends on features set, training data and machine learning algorithm. This paper presents a comprehensive analysis of Phishing attacks, their exploitation, some of the recent machine learning based approaches for phishing detection and their comparative study. It provides a better understanding of the phishing problem, current solution space in machine learning domain, and scope of future research to deal with Phishing attacks efficiently using machine learning based approaches.",
"title": ""
},
{
"docid": "cb561e56e60ba0e5eef2034158c544c2",
"text": "Android is a modern and popular software platform for smartphones. Among its predominant features is an advanced security model which is based on application-oriented mandatory access control and sandboxing. This allows developers and users to restrict the execution of an application to the privileges it has (mandatorily) assigned at installation time. The exploitation of vulnerabilities in program code is hence believed to be confined within the privilege boundaries of an application’s sandbox. However, in this paper we show that a privilege escalation attack is possible. We show that a genuine application exploited at runtime or a malicious application can escalate granted permissions. Our results immediately imply that Android’s security model cannot deal with a transitive permission usage attack and Android’s sandbox model fails as a last resort against malware and sophisticated runtime attacks.",
"title": ""
},
{
"docid": "a0f46c67118b2efec2bce2ecd96d11d6",
"text": "This paper describes the implementation of a service to identify and geo-locate real world events that may be present as social activity signals in two different social networks. Specifically, we focus on content shared by users on Twitter and Instagram in order to design a system capable of fusing data across multiple networks. Past work has demonstrated that it is indeed possible to detect physical events using various social network platforms. However, many of these signals need corroboration in order to handle events that lack proper support within a single network. We leverage this insight to design an unsupervised approach that can correlate event signals across multiple social networks. Our algorithm can detect events and identify the location of the event occurrence. We evaluate our algorithm using both simulations and real world datasets collected using Twitter and Instagram. The results indicate that our algorithm significantly improves false positive elimination and attains high precision compared to baseline methods on real world datasets.",
"title": ""
},
{
"docid": "c84d41e54b12cca847135dfc2e9e13f8",
"text": "PURPOSE\nBaseline restraint prevalence for surgical step-down unit was 5.08%, and for surgical intensive care unit, it was 25.93%, greater than the National Database of Nursing Quality Indicators (NDNQI) mean. Project goal was sustained restraint reduction below the NDNQI mean and maintaining patient safety.\n\n\nBACKGROUND/RATIONALE\nSoft wrist restraints are utilized for falls reduction and preventing device removal but are not universally effective and may put patients at risk of injury. Decreasing use of restrictive devices enhances patient safety and decreases risk of injury.\n\n\nDESCRIPTION\nPhase 1 consisted of advanced practice nurse-facilitated restraint rounds on each restrained patient including multidisciplinary assessment and critical thinking with bedside clinicians including reevaluation for treatable causes of agitation and restraint indications. Phase 2 evaluated less restrictive mitts, padded belts, and elbow splint devices. Following a 4-month trial, phase 3 expanded the restraint initiative including critical care requiring education and collaboration among advanced practice nurses, physician team members, and nurse champions.\n\n\nEVALUATION AND OUTCOMES\nPhase 1 decreased surgical step-down unit restraint prevalence from 5.08% to 3.57%. Phase 2 decreased restraint prevalence from 3.57% to 1.67%, less than the NDNQI mean. Phase 3 expansion in surgical intensive care units resulted in wrist restraint prevalence from 18.19% to 7.12% within the first year, maintained less than the NDNQI benchmarks while preserving patient safety.\n\n\nINTERPRETATION/CONCLUSION\nThe initiative produced sustained reduction in acute/critical care well below the NDNQI mean without corresponding increase in patient medical device removal.\n\n\nIMPLICATIONS\nBy managing causes of agitation, need for restraints is decreased, protecting patients from injury and increasing patient satisfaction. Follow-up research may explore patient experiences with and without restrictive device use.",
"title": ""
},
{
"docid": "a69600725f25e0e927f8ddeb1d30f99d",
"text": "Island conservation in the longer term Conservation of biodiversity on islands is important globally because islands are home to more than 20% of the terrestrial plant and vertebrate species in the world, within less than 5% of the global terrestrial area. Endemism on islands is a magnitude higher than on continents [1]; ten of the 35 biodiversity hotspots in the world are entirely, or largely consist of, islands [2]. Yet this diversity is threatened: over half of all recent extinctions have occurred on islands, which currently harbor over one-third of all terrestrial species facing imminent extinction [3] (Figure 1). In response to the biodiversity crisis, island conservation has been an active field of research and action. Hundreds of invasive species eradications and endangered species translocations have been successfully completed [4–6]. However, despite climate change being an increasing research focus generally, its impacts on island biodiversity are only just beginning to be investigated. For example, invasive species eradications on islands have been prioritized largely by threats to native biodiversity, eradication feasibility, economic cost, and reinvasion potential, but have never considered the threat of sea-level rise. Yet, the probability and extent of island submersion would provide a relevant metric for the longevity of long-term benefits of such eradications.",
"title": ""
},
{
"docid": "afe1be9e13ca6e2af2c5177809e7c893",
"text": "Scar evaluation and revision techniques are chief among the most important skills in the facial plastic and reconstructive surgeon’s armamentarium. Often minimized in importance, these techniques depend as much on a thorough understanding of facial anatomy and aesthetics, advanced principles of wound healing, and an appreciation of the overshadowing psychological trauma as they do on thorough technical analysis and execution [1,2]. Scar revision is unique in the spectrum of facial plastic and reconstructive surgery because the initial traumatic event and its immediate treatment usually cannot be controlled. Patients who are candidates for scar revision procedures often present after significant loss of regional tissue, injury that crosses anatomically distinct facial aesthetic units, wound closure by personnel less experienced in plastic surgical technique, and poor post injury wound management [3,4]. While no scar can be removed completely, plastic surgeons can often improve the appearance of a scar, making it less obvious through the injection or application of certain steroid medications or through surgical procedures known as scar revisions.There are many variables affect the severity of scarring, including the size and depth of the wound, blood supply to the area, the thickness and color of your skin, and the direction of the scar [5,6].",
"title": ""
},
{
"docid": "f78652ff55bc5ae570d82fd1893fc5a3",
"text": "In this paper, we propose an alternative method to estimate room layouts of cluttered indoor scenes. This method enjoys the benefits of two novel techniques. The first one is semantic transfer (ST), which is: (1) a formulation to integrate the relationship between scene clutter and room layout into convolutional neural networks, (2) an architecture that can be end-to-end trained, (3) a practical strategy to initialize weights for very deep networks under unbalanced training data distribution. ST allows us to extract highly robust features under various circumstances, and in order to address the computation redundance hidden in these features we develop a principled and efficient inference scheme named physics inspired optimization (PIO). PIOs basic idea is to formulate some phenomena observed in ST features into mechanics concepts. Evaluations on public datasets LSUN and Hedau show that the proposed method is more accurate than state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
ee802d782e9a88c98e0f97c06da87cd7
|
Fast Decoding in Sequence Models using Discrete Latent Variables
|
[
{
"docid": "cff671af6a7a170fac2daf6acd9d1e3e",
"text": "We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and gi ve a much better representation of each document than Latent Sem antic Analysis. When the deepest layer is forced to use a small numb er of binary variables (e.g. 32), the graphical model performs “semantic hashing”: Documents are mapped to memory addresses in such a way that semantically similar documents are located at near by ddresses. Documents similar to a query document can then be fo und by simply accessing all the addresses that differ by only a fe w bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much fa ster than locality sensitive hashing, which is the fastest curre nt method. By using semantic hashing to filter the documents given to TFID , we achieve higher accuracy than applying TF-IDF to the entir document set.",
"title": ""
},
{
"docid": "5e601792447020020aa02ee539b3a2cf",
"text": "The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT. In this paper, we give a more systematic treatment by summarizing the relevant source information through a convolutional architecture guided by the target information. With different guiding signals during decoding, our specifically designed convolution+gating architectures can pinpoint the parts of a source sentence that are relevant to predicting a target word, and fuse them with the context of entire source sentence to form a unified representation. This representation, together with target language words, are fed to a deep neural network (DNN) to form a stronger NNJM. Experiments on two NIST Chinese-English translation tasks show that the proposed model can achieve significant improvements over the previous NNJM by up to +1.01 BLEU points on average.",
"title": ""
}
] |
[
{
"docid": "5d36e3685f024ffcb7a9856f49c4e717",
"text": "§ Action (term-pair) Representation • Dependency Paths between x, y • Wx: Word Embedding of x • Wy: Word Embedding of y • F(x, y): Surface (Ends with, Contains, etc.), Frequency (pattern-based co-occur info), and Generality (edge not too general or narrow) Features § Performance Study on End-to-End Taxonomy Induction: • WordNet (533/144/144 taxonomies for training, validation, and test set, size (10, 50], depth=4, animals, daily necessities, etc.) § Compared methods: • TAXI [3]: pattern-based method that ranked 1st in the SemEval-2016 Task 13 competition • HypeNET [4]: state-of-the-art hypernymy detection method • HypeNET + MST (maximum spanning tree): post-processing of HypeNET to prune the hypernym graph into a tree • Bansal et al. (2014) [1]: state-of-the-art taxonomy induction method • SubSeq [2]: state-of-the-art results on the SemEval-2016 Task 13 • Taxo-RL (RE, with virtual root embedding), Taxo-RL (NR, with new root addition), Taxo-RL (NR) + FG (with frequency and generality features) • Taxo-RL (partial, allows partial taxonomy), Taxo-RL (full, has to use all terms in the vocabulary)",
"title": ""
},
{
"docid": "01cbe4a4f8cfb9e00bc19290462f38f2",
"text": "In 2008, the bilateral Japan-Philippines Economic Partnership Agreement took effect. Contained within this regional free trade agreement are unique provisions allowing exchange of Filipino nurses and healthcare workers to work abroad in Japan. Japan's increasing need for healthcare workers due to its aging demographic and the Philippines need for economic development could have led to shared benefits under the Japan-Philippines Economic Partnership Agreement. However, 4 years following program implementation, results have been disappointing, e.g., only 7% of candidates passing the programs requirements since 2009. These disappointing results represent a policy failure within the current Japan-Philippines Economic Partnership Agreement framework, and point to the need for reform. Hence, amending the current Japan-Philippines Economic Partnership Agreement structure by potentially adopting a USA based approach to licensure examinations and implementing necessary institutional and governance reform measures may be necessary to ensure beneficial healthcare worker migration for both countries.",
"title": ""
},
{
"docid": "68c1cf9be287d2ccbe8c9c2ed675b39e",
"text": "The primary task of the peripheral vasculature (PV) is to supply the organs and extremities with blood, which delivers oxygen and nutrients, and to remove metabolic waste products. In addition, peripheral perfusion provides the basis of local immune response, such as wound healing and inflammation, and furthermore plays an important role in the regulation of body temperature. To adequately serve its many purposes, blood flow in the PV needs to be under constant tight regulation, both on a systemic level through nervous and hormonal control, as well as by local factors, such as metabolic tissue demand and hydrodynamic parameters. As a matter of fact, the body does not retain sufficient blood volume to fill the entire vascular space, and only 25% of the capillary bed is in use during resting state. The importance of microvascular control is clearly illustrated by the disastrous effects of uncontrolled blood pooling in the extremities, such as occurring during certain types of shock. Peripheral vascular disease (PVD) is the general name for a host of pathologic conditions of disturbed PV function. Peripheral vascular disease includes occlusive diseases of the arteries and the veins. An example is peripheral arterial occlusive disease (PAOD), which is the result of a buildup of plaque on the inside of the arterial walls, inhibiting proper blood supply to the organs. Symptoms include pain and cramping in extremities, as well as fatigue; ultimately, PAOD threatens limb vitality. The PAOD is often indicative of atherosclerosis of the heart and brain, and is therefore associated with an increased risk of myocardial infarction or cerebrovascular accident (stroke). Venous occlusive disease is the forming of blood clots in the veins, usually in the legs. Clots pose a risk of breaking free and traveling toward the lungs, where they can cause pulmonary embolism. In the legs, thromboses interfere with the functioning of the venous valves, causing blood pooling in the leg (postthrombotic syndrome) that leads to swelling and pain. Other causes of disturbances in peripheral perfusion include pathologies of the autoregulation of the microvasculature, such as in Reynaud’s disease or as a result of diabetes. To monitor vascular function, and to diagnose and monitor PVD, it is important to be able to measure and evaluate basic vascular parameters, such as arterial and venous blood flow, arterial blood pressure, and vascular compliance. Many peripheral vascular parameters can be assessed with invasive or minimally invasive procedures. Examples are the use of arterial catheters for blood pressure monitoring and the use of contrast agents in vascular X ray imaging for the detection of blood clots. Although they are sensitive and accurate, invasive methods tend to be more cumbersome to use, and they generally bear a greater risk of adverse effects compared to noninvasive techniques. These factors, in combination with their usually higher cost, limit the use of invasive techniques as screening tools. Another drawback is their restricted use in clinical research because of ethical considerations. Although many of the drawbacks of invasive techniques are overcome by noninvasive methods, the latter typically are more challenging because they are indirect measures, that is, they rely on external measurements to deduce internal physiologic parameters. 
Noninvasive techniques often make use of physical and physiologic models, and one has to be mindful of imperfections in the measurements and the models, and their impact on the accuracy of results. Noninvasive methods therefore require careful validation and comparison to accepted, direct measures, which is the reason why these methods typically undergo long development cycles. Even though the genesis of many noninvasive techniques reaches back as far as the late nineteenth century, it was the technological advances of the second half of the twentieth century in such fields as micromechanics, microelectronics, and computing technology that led to the development of practical implementations. The field of noninvasive vascular measurements has undergone a developmental explosion over the last two decades, and it is still very much a field of ongoing research and development. This article describes the most important and most frequently used methods for noninvasive assessment of 234 PERIPHERAL VASCULAR NONINVASIVE MEASUREMENTS",
"title": ""
},
{
"docid": "89e8df51a72309dc99789f90e922d1c5",
"text": "Information is traditionally confined to paper or digitally to a screen. In this paper, we introduce WUW, a wearable gestural interface, which attempts to bring information out into the tangible world. By using a tiny projector and a camera mounted on a hat or coupled in a pendant like wearable device, WUW sees what the user sees and visually augments surfaces or physical objects the user is interacting with. WUW projects information onto surfaces, walls, and physical objects around us, and lets the user interact with the projected information through natural hand gestures, arm movements or interaction with the object itself.",
"title": ""
},
{
"docid": "65fac26fc29ff492eb5a3e43f58ecfb2",
"text": "The introduction of new anticancer drugs into the clinic is often hampered by a lack of qualified biomarkers. Method validation is indispensable to successful biomarker qualification and is also a regulatory requirement. Recently, the fit-for-purpose approach has been developed to promote flexible yet rigorous biomarker method validation, although its full implications are often overlooked. This review aims to clarify many of the scientific and regulatory issues surrounding biomarker method validation and the analysis of samples collected from clinical trial subjects. It also strives to provide clear guidance on validation strategies for each of the five categories that define the majority of biomarker assays, citing specific examples.",
"title": ""
},
{
"docid": "238f3288dc1523229c6bcc3337e233e6",
"text": "The increasing diffusion of smart devices, along with the dynamism of the mobile applications ecosystem, are boosting the production of malware for the Android platform. So far, many different methods have been developed for detecting Android malware, based on either static or dynamic analysis. The main limitations of existing methods include: low accuracy, proneness to evasion techniques, and weak validation, often limited to emulators or modified kernels. We propose an Android malware detection method, based on sequences of system calls, that overcomes these limitations. The assumption is that malicious behaviors (e.g., sending high premium rate SMS, cyphering data for ransom, botnet capabilities, and so on) are implemented by specific system calls sequences: yet, no apriori knowledge is available about which sequences are associated with which malicious behaviors, in particular in the mobile applications ecosystem where new malware and non-malware applications continuously arise. Hence, we use Machine Learning to automatically learn these associations (a sort of \"fingerprint\" of the malware); then we exploit them to actually detect malware. Experimentation on 20000 execution traces of 2000 applications (1000 of them being malware belonging to different malware families), performed on a real device, shows promising results: we obtain a detection accuracy of 97%. Moreover, we show that the proposed method can cope with the dynamism of the mobile apps ecosystem, since it can detect unknown malware.",
"title": ""
},
{
"docid": "4138f62dfaefe49dd974379561fb6fea",
"text": "For a set of 1D vectors, standard singular value decomposition (SVD) is frequently applied. For a set of 2D objects such as images or weather maps, we form 2DSVD, which computes principal eigenvectors of rowrow and column-column covariance matrices, exactly as in the standard SVD. We study optimality properties of 2DSVD as low-rank approximation and show that it provides a framework unifying two recent approaches. Experiments on images and weather maps illustrate the usefulness of 2DSVD.",
"title": ""
},
{
"docid": "7cecfd37e44b26a67bee8e9c7dd74246",
"text": "Forecasting hourly spot prices for real-time electricity usage is a challenging task. This paper investigates a series of forecasting methods to 90 and 180 days of load data collection acquired from the Iberian Electricity Market (MIBEL). This dataset was used to train and test multiple forecast models. The Mean Absolute Percentage Error (MAPE) for the proposed Hybrid combination of Auto Regressive Integrated Moving Average (ARIMA) and Generalized Linear Model (GLM) was compared against ARIMA, GLM, Random forest (RF) and Support Vector Machines (SVM) methods. The results indicate significant improvement in MAPE and correlation co-efficient values for the proposed hybrid ARIMA-GLM method.",
"title": ""
},
{
"docid": "f32ff72da2f90ed0e5279815b0fb10e0",
"text": "We investigate the application of non-orthogonal multiple access (NOMA) with successive interference cancellation (SIC) in downlink multiuser multiple-input multiple-output (MIMO) cellular systems, where the total number of receive antennas at user equipment (UE) ends in a cell is more than the number of transmit antennas at the base station (BS). We first dynamically group the UE receive antennas into a number of clusters equal to or more than the number of BS transmit antennas. A single beamforming vector is then shared by all the receive antennas in a cluster. We propose a linear beamforming technique in which all the receive antennas can significantly cancel the inter-cluster interference. On the other hand, the receive antennas in each cluster are scheduled on the power domain NOMA basis with SIC at the receiver ends. For inter-cluster and intra-cluster power allocation, we provide dynamic power allocation solutions with an objective to maximizing the overall cell capacity. An extensive performance evaluation is carried out for the proposed MIMO-NOMA system and the results are compared with those for conventional orthogonal multiple access (OMA)-based MIMO systems and other existing MIMO-NOMA solutions. The numerical results quantify the capacity gain of the proposed MIMO-NOMA model over MIMO-OMA and other existing MIMO-NOMA solutions.",
"title": ""
},
{
"docid": "e8b5fcac441c46e46b67ffbdd4b043e6",
"text": "We present DroidSafe, a static information flow analysis tool that reports potential leaks of sensitive information in Android applications. DroidSafe combines a comprehensive, accurate, and precise model of the Android runtime with static analysis design decisions that enable the DroidSafe analyses to scale to analyze this model. This combination is enabled by accurate analysis stubs, a technique that enables the effective analysis of code whose complete semantics lies outside the scope of Java, and by a combination of analyses that together can statically resolve communication targets identified by dynamically constructed values such as strings and class designators. Our experimental results demonstrate that 1) DroidSafe achieves unprecedented precision and accuracy for Android information flow analysis (as measured on a standard previously published set of benchmark applications) and 2) DroidSafe detects all malicious information flow leaks inserted into 24 real-world Android applications by three independent, hostile Red-Team organizations. The previous state-of-the art analysis, in contrast, detects less than 10% of these malicious flows.",
"title": ""
},
{
"docid": "61e16a9e53c2140d7c39694f83b603ac",
"text": "Object detection in videos is an important task in computer vision for various applications such as object tracking, video summarization and video search. Although great progress has been made in improving the accuracy of object detection in recent years due to the rise of deep neural networks, the state-of-the-art algorithms are highly computationally intensive. In order to address this challenge, we make two important observations in the context of videos: (i) Objects often occupy only a small fraction of the area in each video frame, and (ii) There is a high likelihood of strong temporal correlation between consecutive frames. Based on these observations, we propose Pack and Detect (PaD), an approach to reduce the computational requirements of object detection in videos. In PaD, only selected video frames called anchor frames are processed at full size. In the frames that lie between anchor frames (inter-anchor frames), regions of interest (ROIs) are identified based on the detections in the previous frame. We propose an algorithm to pack the ROIs of each inter-anchor frame together into a reduced-size frame. The computational requirements of the detector are reduced due to the lower size of the input. In order to maintain the accuracy of object detection, the proposed algorithm expands the ROIs greedily to provide additional background around each object to the detector. PaD can use any underlying neural network architecture to process the full-size and reduced-size frames. Experiments using the ImageNet video object detection dataset indicate that PaD can potentially reduce the number of FLOPS required for a frame by 4×. This leads to an overall increase in throughput of 1.25× on a 2.1 GHz Intel Xeon server with a NVIDIA Titan X GPU at the cost of 1.1% drop in accuracy.",
"title": ""
},
{
"docid": "a17e1bf423195ff66d73456f931fa5a1",
"text": "We propose a dialogue state tracker based on long short term memory (LSTM) neural networks. LSTM is an extension of a recurrent neural network (RNN), which can better consider distant dependencies in sequential input. We construct a LSTM network that receives utterances of dialogue participants as input, and outputs the dialogue state of the current utterance. The input utterances are separated into vectors of words with their orders, which are further converted to word embeddings to avoid sparsity problems. In experiments, we combined this system with the baseline system of the dialogue state tracking challenge (DSTC), and achieved improved dialogue state tracking accuracy.",
"title": ""
},
{
"docid": "9779a328b54e79a34191cec812ded633",
"text": "We present a novel approach to computational modeling of social interactions based on modeling of essential social interaction predicates (ESIPs) such as joint attention and entrainment. Based on sound social psychological theory and methodology, we collect a new “Tower Game” dataset consisting of audio-visual capture of dyadic interactions labeled with the ESIPs. We expect this dataset to provide a new avenue for research in computational social interaction modeling. We propose a novel joint Discriminative Conditional Restricted Boltzmann Machine (DCRBM) model that combines a discriminative component with the generative power of CRBMs. Such a combination enables us to uncover actionable constituents of the ESIPs in two steps. First, we train the DCRBM model on the labeled data and get accurate (76%-49% across various ESIPs) detection of the predicates. Second, we exploit the generative capability of DCRBMs to activate the trained model so as to generate the lower-level data corresponding to the specific ESIP that closely matches the actual training data (with mean square error 0.01-0.1 for generating 100 frames). We are thus able to decompose the ESIPs into their constituent actionable behaviors. Such a purely computational determination of how to establish an ESIP such as engagement is unprecedented.",
"title": ""
},
{
"docid": "7b5d2e7f1475997a49ed9fa820d565fe",
"text": "PURPOSE\nImplementations of health information technologies are notoriously difficult, which is due to a range of inter-related technical, social and organizational factors that need to be considered. In the light of an apparent lack of empirically based integrated accounts surrounding these issues, this interpretative review aims to provide an overview and extract potentially generalizable findings across settings.\n\n\nMETHODS\nWe conducted a systematic search and critique of the empirical literature published between 1997 and 2010. In doing so, we searched a range of medical databases to identify review papers that related to the implementation and adoption of eHealth applications in organizational settings. We qualitatively synthesized this literature extracting data relating to technologies, contexts, stakeholders, and their inter-relationships.\n\n\nRESULTS\nFrom a total body of 121 systematic reviews, we identified 13 systematic reviews encompassing organizational issues surrounding health information technology implementations. By and large, the evidence indicates that there are a range of technical, social and organizational considerations that need to be deliberated when attempting to ensure that technological innovations are useful for both individuals and organizational processes. However, these dimensions are inter-related, requiring a careful balancing act of strategic implementation decisions in order to ensure that unintended consequences resulting from technology introduction do not pose a threat to patients.\n\n\nCONCLUSIONS\nOrganizational issues surrounding technology implementations in healthcare settings are crucially important, but have as yet not received adequate research attention. This may in part be due to the subjective nature of factors, but also due to a lack of coordinated efforts toward more theoretically-informed work. Our findings may be used as the basis for the development of best practice guidelines in this area.",
"title": ""
},
{
"docid": "2d67465fbc2799f815237a05905b8d7a",
"text": "This paper presents a novel way to perform multi-modal face recognition. We use Partial Least Squares (PLS) to linearly map images in different modalities to a common linear subspace in which they are highly correlated. PLS has been previously used effectively for feature selection in face recognition. We show both theoretically and experimentally that PLS can be used effectively across modalities. We also formulate a generic intermediate subspace comparison framework for multi-modal recognition. Surprisingly, we achieve high performance using only pixel intensities as features. We experimentally demonstrate the highest published recognition rates on the pose variations in the PIE data set, and also show that PLS can be used to compare sketches to photos, and to compare images taken at different resolutions.",
"title": ""
},
{
"docid": "e55f8ad65250902a53b1bbfe6f16d26c",
"text": "Automatic key phrase extraction has many important applications including but not limited to summarization, cataloging/indexing, feature extraction for clustering and classification, and data mining. This paper presents a simple, yet effective algorithm (KP-Miner) for achieving this task. The result of an experiment carried out to investigate the effectiveness of this algorithm is also presented. In this experiment the devised algorithm is applied to six different datasets consisting of 481 documents. The results are then compared to two existing sophisticated machine learning based automatic keyphrase extraction systems. The results of this experiment show that the devised algorithm is comparable to both systems",
"title": ""
},
{
"docid": "17953a3e86d3a4396cbd8a911c477f07",
"text": "We introduce Deep Semantic Embedding (DSE), a supervised learning algorithm which computes semantic representation for text documents by respecting their similarity to a given query. Unlike other methods that use singlelayer learning machines, DSE maps word inputs into a lowdimensional semantic space with deep neural network, and achieves a highly nonlinear embedding to model the human perception of text semantics. Through discriminative finetuning of the deep neural network, DSE is able to encode the relative similarity between relevant/irrelevant document pairs in training data, and hence learn a reliable ranking score for a query-document pair. We present test results on datasets including scientific publications and user-generated knowledge base.",
"title": ""
},
{
"docid": "20c3addef683da760967df0c1e83f8e3",
"text": "An RF duplexer has been fabricated on a CMOS IC for use in 3G/4G cellular transceivers. The passive circuit sustains large voltage swings in the transmit path, and isolates the receive path from the transmitter by more than 45 dB across a bandwidth of 200 MHz in 3G/4G bands I, II, III, IV, and IX. A low noise amplifier embedded into the duplexer demonstrates a cascade noise figure of 5 dB with more than 27 dB of gain. The duplexer inserts 2.5 dB of loss between power amplifier and antenna.",
"title": ""
},
{
"docid": "9fc8d85122f1cf22e63ac2401531e448",
"text": "Recognizing multiple labels of images is a fundamental but challenging task in computer vision, and remarkable progress has been attained by localizing semantic-aware image regions and predicting their labels with deep convolutional neural networks. The step of hypothesis regions (region proposals) localization in these existing multi-label image recognition pipelines, however, usually takes redundant computation cost, e.g., generating hundreds of meaningless proposals with nondiscriminative information and extracting their features, and the spatial contextual dependency modeling among the localized regions are often ignored or over-simplified. To resolve these issues, this paper proposes a recurrent attention reinforcement learning framework to iteratively discover a sequence of attentional and informative regions that are related to different semantic objects and further predict label scores conditioned on these regions. Besides, our method explicitly models longterm dependencies among these attentional regions that help to capture semantic label co-occurrence and thus facilitate multilabel recognition. Extensive experiments and comparisons on two large-scale benchmarks (i.e., PASCAL VOC and MSCOCO) show that our model achieves superior performance over existing state-of-the-art methods in both performance and efficiency as well as explicitly identifying image-level semantic labels to specific object regions.",
"title": ""
},
{
"docid": "b13ccc915f81eca45048ffe9d5da5d4f",
"text": "Mobile robots are increasingly being deployed in the real world in response to a heightened demand for applications such as transportation, delivery and inspection. The motion planning systems for these robots are expected to have consistent performance across the wide range of scenarios that they encounter. While state-of-the-art planners, with provable worst-case guarantees, can be employed to solve these planning problems, their finite time performance varies across scenarios. This thesis proposes that the planning module for a robot must adapt its search strategy to the distribution of planning problems encountered to achieve real-time performance. We address three principal challenges of this problem. Firstly, we show that even when the planning problem distribution is fixed, designing a nonadaptive planner can be challenging as the performance of planning strategies fluctuates with small changes in the environment. We characterize the existence of complementary strategies and propose to hedge our bets by executing a diverse ensemble of planners. Secondly, when the distribution is varying, we require a meta-planner that can automatically select such an ensemble from a library of black-box planners. We show that greedily training a list of predictors to focus on failure cases leads to an effective meta-planner. For situations where we have no training data, we show that we can learn an ensemble on-the-fly by adopting algorithms from online paging theory. Thirdly, in the interest of efficiency, we require a white-box planner that directly adapts its search strategy during a planning cycle. We propose an efficient procedure for training adaptive search heuristics in a data-driven imitation learning framework. We also draw a novel connection to Bayesian active learning, and propose algorithms to adaptively evaluate edges of a graph. Our approach leads to the synthesis of a robust real-time planning module that allows a UAV to navigate seamlessly across environments and speed-regimes. We evaluate our framework on a spectrum of planning problems and show closed-loop results on 3 UAV platforms a full-scale autonomous helicopter, a large scale hexarotor and a small quadrotor. While the thesis was motivated by mobile robots, we have shown that the individual algorithms are broadly applicable to other problem domains such as informative path planning and manipulation planning. We also establish novel connections between the disparate fields of motion planning and active learning, imitation learning and online paging which opens doors to several new research problems.",
"title": ""
}
] |
scidocsrr
|
ae15bf744c09efc0b45d37c675b81a6f
|
Bayesian online clustering of eye movement data
|
[
{
"docid": "064373b19f13450d83c3c179405edffb",
"text": "Many machine vision applications, such as compression, pictorial database querying and image understanding, often need to analyze in detail only a representative subset of the image that may be arranged into sequence of loci called regions-of-interest, ROIs. We haveinvestigated and developed a methodology that serves to automaticaJly identify such a subset of aROIo (olgorithmically detected ROIs) using different image processing algorithms and appropriate clustering procedures. In human perception, an internal representation directs top-down, context-dependent sequences ofeye movements to fixate on similar sequencesof hROIs",
"title": ""
}
] |
[
{
"docid": "c591881de09c709ae2679cacafe24008",
"text": "This paper discusses a technique to estimate the position of a sniper using a spatial microphone array placed on elevated platforms. The shooter location is obtained from the exact location of the microphone array, from topographic information of the area and from an estimated direction of arrival (DoA) of the acoustic wave related to the explosion in the gun barrel, which is known as muzzle blast. The estimation of the DOA is based on the time differences the sound wavefront arrives at each pair of microphones, employing a technique known as Generalized Cross Correlation (GCC) with phase transform. The main idea behind the localization procedure used herein is that, based on the DoA, the acoustical path of the muzzle blast (from the weapon to the microphone) can be marked as a straight line on a terrain profile obtained from an accurate digital map, allowing the estimation of the shooter location whenever the microphone array is located on an dominant position. In addition, a new approach to improve the DoA estimation from a cognitive selection of microphones is introduced. In this technique, the microphones selected must form a consistent (sum of delays equal to zero) fundamental loop. The results obtained after processing muzzle blast gunshot signals recorded in a typical scenario, show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "b5372d4cad87aab69356ebd72aed0e0b",
"text": "Web content nowadays can also be accessed through new generation of Internet connected TVs. However, these products failed to change users’ behavior when consuming online content. Users still prefer personal computers to access Web content. Certainly, most of the online content is still designed to be accessed by personal computers or mobile devices. In order to overcome the usability problem of Web content consumption on TVs, this paper presents a knowledge graph based video generation system that automatically converts textual Web content into videos using semantic Web and computer graphics based technologies. As a use case, Wikipedia articles are automatically converted into videos. The effectiveness of the proposed system is validated empirically via opinion surveys. Fifty percent of survey users indicated that they found generated videos enjoyable and 42 % of them indicated that they would like to use our system to consume Web content on their TVs.",
"title": ""
},
{
"docid": "4338819a7ff4753c37f209ec0ba010ba",
"text": "Hydraulic and pneumatic actuators are used as actuators of robots. They have large capabilities of instantaneous output, but with problems of increase in size and mass, and difficulty for precise control. In contrast, electromagnetic motors have better controllability, lower cost, and smaller size. However, in order to actuate robots, they are usually used with reducers which have high reduction ratio, and it is difficult to realize creature-like dynamic motions such as fast running and high jumping, due to low backdrivability of joints. To solve the problem, we have developed leg mechanisms, which consist of a spring and a damper inspired by bi-articular muscle-tendon complex of animals. The final target is to develop a quadruped robot which can walk, run fast and jump highly like a cat. A cat mainly uses its hind legs in jumping and front legs in landing. It implies that the hind legs play an important role in jumping, and that the front legs do in landing. For this reason, it is necessary to design different leg structures for front and hind legs. In this paper, we develop a new front leg mechanism suitable to a hind leg mechanism which was already made by our group, and make a small quadruped robot. As the result of experiments for dynamic motions, stable running trot at a speed of 3.5 kilometers per hour and forward jumping of 1 body length per jump have been realized by the robot.",
"title": ""
},
{
"docid": "9a973833c640e8a9fe77cd7afdae60f2",
"text": "Metastasis is a characteristic trait of most tumour types and the cause for the majority of cancer deaths. Many tumour types, including melanoma and breast and prostate cancers, first metastasize via lymphatic vessels to their regional lymph nodes. Although the connection between lymph node metastases and shorter survival times of patients was made decades ago, the active involvement of the lymphatic system in cancer, metastasis has been unravelled only recently, after molecular markers of lymphatic vessels were identified. A growing body of evidence indicates that tumour-induced lymphangiogenesis is a predictive indicator of metastasis to lymph nodes and might also be a target for prevention of metastasis. This article reviews the current understanding of lymphangiogenesis in cancer anti-lymphangiogenic strategies for prevention and therapy of metastatic disease, quantification of lymphangiogenesis for the prognosis and diagnosis of metastasis and in vivo imaging technologies for the assessment of lymphatic vessels, drainage and lymph nodes.",
"title": ""
},
{
"docid": "9f93290d5a344896954875cdc350c0d5",
"text": "PURPOSE\nTo evaluate the diagnostic performance of (68)Ga-DOTATATE (18)F-fluorodeoxyglucose ((18)F-FDG) positron emission tomography (PET)/computed tomography (CT), (18)F-FDG PET/CT and (131)I-MIBG scintigraphy in the mapping of metastatic pheochromocytoma and paraganglioma.\n\n\nMATERIALS AND METHODS\nSeventeen patients (male = 8, female = 9; age range, 13-68 years) with clinically proven or suspicious metastatic pheochromocytoma or paraganglioma were included in this prospective study. Twelve patients underwent all three modalities, whereas five patients underwent (68)Ga-DOTATATE and (131)I-MIBG without (18)F-FDG. A composite reference standard derived from anatomical and functional imaging findings, along with histopathological information, was used to validate the findings. Results were analysed on a per-patient and on per-lesion basis. Sensitivity and accuracy were assessed using McNemar's test.\n\n\nRESULTS\nOn a per-patient basis, 14/17 patients were detected in (68)Ga-DOTATATE, 7/17 patients in (131)I-MIBG, and 10/12 patients in (18)F-FDG. The sensitivity and accuracy of (68)Ga-DOTATATE, (131)I-MIBG and (18)F-FDG were (93.3 %, 94.1 %), (46.7 %, 52.9 %) and (90.9 %, 91.7 %) respectively. On a per-lesion basis, an overall of 472 positive lesions were detected; of which 432/472 were identified by (68)Ga-DOTATATE, 74/472 by (131)I-MIBG, and 154/300 (patient, n = 12) by (18)F-FDG. The sensitivity and accuracy of (68)Ga-DOTATATE, (131)I-MIBG and (18)F-FDG were (91.5 %, 92.6 % p < 0.0001), (15.7 %, 26.0 % p < 0.0001) and (51.3 %, 57.8 % p < 0.0001) respectively. Discordant lesions were demonstrated on (68)Ga-DOTATATE, (131)I-MIBG and (18)F-FDG.\n\n\nCONCLUSIONS\nGa-DOTATATE PET/CT shows high diagnostic accuracy than (131)I-MIBG scintigraphy and (18)F-FDG PET/ CT in mapping metastatic pheochromocytoma and paraganglioma.",
"title": ""
},
{
"docid": "032f5b66ae4ede7e26a911c9d4885b98",
"text": "Are trust and risk important in consumers' electronic commerce purchasing decisions? What are the antecedents of trust and risk in this context? How do trust and risk affect an Internet consumer's purchasing decision? To answer these questions, we i) develop a theoretical framework describing the trust-based decision-making process a consumer uses when making a purchase from a given site, ii) test the proposed model using a Structural Equation Modeling technique on Internet consumer purchasing behavior data collected via a Web survey, and iii) consider the implications of the model. The results of the study show that Internet consumers' trust and perceived risk have strong impacts on their purchasing decisions. Consumer disposition to trust, reputation, privacy concerns, security concerns, the information quality of the Website, and the company's reputation, have strong effects on Internet consumers' trust in the Website. Interestingly, the presence of a third-party seal did not strongly influence consumers' trust. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "66569dbe07f85133c22bde1e59d3eaa6",
"text": "Understanding the normal operation of IP networks is a common step in building a solution for automatic detection of network anomalies. Toward this end, we analyze the usage of two different approaches: the AutoRegressive Integrated Moving Average (ARIMA) model and an improvement of the traditional Holt-winters method. We use both models for traffic characterization, called Digital Signature of Network Segment using Flow analysis (DSNSF), and volume anomaly or outliers detection. The DSNSFs obtained by the presented models are compared to the actual traffic of bits and packets of a real network environment and then subjected to specific evaluations in order to measure its accuracy. The presented models are capable of providing feedback through its predictive capabilities and hence provide an early warning system.",
"title": ""
},
{
"docid": "33c38bd7444164fb1539da573da3db25",
"text": "Axial endplay problems often occur in electrical machines even in the conventional skew motor. In order to solve these problems, an improved skew rotor is proposed to weaken the ill-effect of the conventional skew motor by skewing the slots toward reverse directions. The space distributions of magnetic flux field and the Maxwell stress tensor on the rotor surfaces are analyzed by an analytical method. The time-step finite-element 3-D whole model of a novel skew squirrel-cage induction motor is presented for verification. The results indicate that the radial and the axial forces decrease, but the rotary torque remains unchanged. The validity of the improved method is verified by means of the comparison with the conventional one.",
"title": ""
},
{
"docid": "18b173283a1eb58170982504bec7484f",
"text": "Database forensics is a domain that uses database content and metadata to reveal malicious activities on database systems in an Internet of Things environment. Although the concept of database forensics has been around for a while, the investigation of cybercrime activities and cyber breaches in an Internet of Things environment would benefit from the development of a common investigative standard that unifies the knowledge in the domain. Therefore, this paper proposes common database forensic investigation processes using a design science research approach. The proposed process comprises four phases, namely: 1) identification; 2) artefact collection; 3) artefact analysis; and 4) the documentation and presentation process. It allows the reconciliation of the concepts and terminologies of all common database forensic investigation processes; hence, it facilitates the sharing of knowledge on database forensic investigation among domain newcomers, users, and practitioners.",
"title": ""
},
{
"docid": "fba77f5a86036c64a88cd4d93b42dbfb",
"text": "A deep generative model is developed for representation and analysis of images, based on a hierarchical convolutional dictionary-learning framework. Stochastic unpooling is employed to link consecutive layers in the model, yielding top-down image generation. A Bayesian support vector machine is linked to the toplayer features, yielding max-margin discrimination. Deep deconvolutional inference is employed when testing, to infer the latent features, and the top-layer features are connected with the max-margin classifier for discrimination tasks. The model is efficiently trained using a Monte Carlo expectation-maximization (MCEM) algorithm; the algorithm is implemented on graphical processor units (GPU) to enable large-scale learning, and fast testing. Excellent results are obtained on several benchmark datasets, including ImageNet, demonstrating that the proposed model achieves results that are highly competitive with similarly sized convolutional neural networks.",
"title": ""
},
{
"docid": "a604527951768b088fe2e40104fa78bb",
"text": "In this study, the Multi-Layer Perceptron (MLP)with Back-Propagation learning algorithm are used to classify to effective diagnosis Parkinsons disease(PD).It’s a challenging problem for medical community.Typically characterized by tremor, PD occurs due to the loss of dopamine in the brains thalamic region that results in involuntary or oscillatory movement in the body. A feature selection algorithm along with biomedical test values to diagnose Parkinson disease.Clinical diagnosis is done mostly by doctor’s expertise and experience.But still cases are reported of wrong diagnosis and treatment.Patients are asked to take number of tests for diagnosis.In many cases,not all the tests contribute towards effective diagnosis of a disease.Our work is to classify the presence of Parkinson disease with reduced number of attributes.Original,22 attributes are involved in classify.We use Information Gain to determine the attributes which reduced the number of attributes which is need to be taken from patients.The Artificial neural networks is used to classify the diagnosis of patients.Twenty-Two attributes are reduced to sixteen attributes.The accuracy is in training data set is 82.051% and in the validation data set is 83.333%. Keywords—Data mining , classification , Parkinson disease , Artificial neural networks , Feature Selection , Information Gain",
"title": ""
},
{
"docid": "53d7816b9db8dd5b1d2f2fc2ebaebcf5",
"text": "Estimates suggest that up to 90% or more youth between 12 and 18 years have access to the Internet. Concern has been raised that this increased accessibility may lead to a rise in pornography seeking among children and adolescents, with potentially serious ramifications for child and adolescent sexual development. Using data from the Youth Internet Safety Survey, a nationally representative, cross-sectional telephone survey of 1501 children and adolescents (ages 10-17 years), characteristics associated with self-reported pornography seeking behavior, both on the Internet and using traditional methods (e.g., magazines), are identified. Seekers of pornography, both online and offline, are significantly more likely to be male, with only 5% of self-identified seekers being female. The vast majority (87%) of youth who report looking for sexual images online are 14 years of age or older, when it is developmentally appropriate to be sexually curious. Children under the age of 14 who have intentionally looked at pornography are more likely to report traditional exposures, such as magazines or movies. Concerns about a large group of young children exposing themselves to pornography on the Internet may be overstated. Those who report intentional exposure to pornography, irrespective of source, are significantly more likely to cross-sectionally report delinquent behavior and substance use in the previous year. Further, online seekers versus offline seekers are more likely to report clinical features associated with depression and lower levels of emotional bonding with their caregiver. Results of the current investigation raise important questions for further inquiry. Findings from these cross-sectional data provide justification for longitudinal studies aimed at parsing out temporal sequencing of psychosocial experiences.",
"title": ""
},
{
"docid": "e4319431eb83ed67ba03b66957de6f9e",
"text": "An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well. This paper gives overview of Artificial Neural Network, working & training of ANN. It also explain the application and advantages of ANN.",
"title": ""
},
{
"docid": "8582ed674eb81192fe51c5fccd9c1b35",
"text": "Low power (5 W-25 W) automotive DC-DC converters have a very wide input voltage range from 4.5V to 42V, and it is usually operated at high switching frequency in order to comply with the strict CISPR-25 Standard for EMI performance. The conventional buck converter is currently employed for such applications owing to its simplicity and low cost, but it has low efficiency at high switching frequencies and high EMI emission due to hard switching. To solve these issues, an active-clamp buck converter is proposed in this paper, which features zero-voltage-switching (ZVS) and hence high efficiency and low EMI emission over a wide input voltage range, making it suitable for low power automotive applications. The operating principles including ZVS mechanism, detailed design considerations and experimental results from a 1 MHz prototype are presented.",
"title": ""
},
{
"docid": "c71635ec5c0ef83c850cab138330f727",
"text": "Academic institutions are now drawing attention in finding methods for making effective learning process, for identifying learner’s achievements and weakness, for tracing academic progress and also for predicting future performance. People’s increased expectation for accountability and transparency makes it necessary to implement big data analytics in the educational institution. But not all the educationalist and administrators are ready to take the challenge. So, it is now obvious to know about the necessity and opportunity as well as challenges of implementing big data analytics. This paper will describe the needs, opportunities and challenges of implementing big data analytics in the education sector.",
"title": ""
},
{
"docid": "836c51ed5c9ef5e432498684996f4eb5",
"text": "This paper presents a system that compositionally maps outputs of a wide-coverage Japanese CCG parser onto semantic representations and performs automated inference in higher-order logic. The system is evaluated on a textual entailment dataset. It is shown that the system solves inference problems that focus on a variety of complex linguistic phenomena, including those that are difficult to represent in the standard first-order logic.",
"title": ""
},
{
"docid": "021789cea259697f236986028218e3f6",
"text": "In the IT world of corporate networking, how businesses store and compute data is starting to shift from in-house servers to the cloud. However, some enterprises are still hesitant to make this leap to the cloud because of their information security and data privacy concerns. Enterprises that want to invest into this service need to feel confident that the information stored on the cloud is secure. Due to this need for confidence, trust is one of the major qualities that cloud service providers (CSPs) must build for cloud service users (CSUs). To do this, a model that all CSPs can follow must exist to establish a trust standard in the industry. If no concrete model exists, the future of cloud computing will be stagnant. This paper presents a new trust model that involves all the cloud stakeholders such as CSU, CSP, and third-party auditors. Our proposed trust model is objective since it involves third-party auditors to develop unbiased trust between the CSUs and the CSPs. Furthermore, to support the implementation of the proposed trust model, we rank CSPs according to the trust-values obtained from the trust model. The final score for each participating CSP will be determined based on the third-party assessment and the feedback received from the CSUs.",
"title": ""
},
{
"docid": "389538174613c07818361d014deecd22",
"text": "High range-resolution monopulse (HRRM) tracking radar which maintains wide instantaneous bandwidth through both range and angle error sensing channels provides range, azimuth, elevation, and amplitude for each resolved part of the target. The three-dimensional target detail can be used to improve and extend radar performance in several ways: for improved precision of target location, for target classification and recognition, to counter repeater-type ECM, to improve low-angle multipath tracking, to resolve multiple targets, as a miss-distance measurement capability, and for improved tracking in chaff and clutter. These have been demonstrated qualitatively except for the ECCM to repeater ECM and low-altitude tracking improvement. Initial results from an experimental HRRM radar with 3-ns pulse length show resolution of aircraft into its major parts and precise location of each resolved part accurately in range and angle. Realtime closed-loop tracking is performed on aircraft in flight using high-speed sampled, digitized, and processed HRRM range and angle video data. Clutter rejection capability is also demonstrated.",
"title": ""
},
{
"docid": "e7adf9c63fd7a3814b0c565c3a4c14a3",
"text": "A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose). A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45% compared to the state-of-the-art. Capsules also show far more resistance to white box adversarial attacks than our baseline convolutional neural network.",
"title": ""
}
] |
scidocsrr
|
d000772a5efdb3234e2dfd38c11e903b
|
Contracting, equal, and expanding learning schedules: the optimal distribution of learning sessions depends on retention interval.
|
[
{
"docid": "3ade96c73db1f06d7e0c1f48a0b33387",
"text": "To achieve enduring retention, people must usually study information on multiple occasions. How does the timing of study events affect retention? Prior research has examined this issue only in a spotty fashion, usually with very short time intervals. In a study aimed at characterizing spacing effects over significant durations, more than 1,350 individuals were taught a set of facts and--after a gap of up to 3.5 months--given a review. A final test was administered at a further delay of up to 1 year. At any given test delay, an increase in the interstudy gap at first increased, and then gradually reduced, final test performance. The optimal gap increased as test delay increased. However, when measured as a proportion of test delay, the optimal gap declined from about 20 to 40% of a 1-week test delay to about 5 to 10% of a 1-year test delay. The interaction of gap and test delay implies that many educational practices are highly inefficient.",
"title": ""
},
{
"docid": "ab05a100cfdb072f65f7dad85b4c5aea",
"text": "Expanding retrieval practice refers to the idea that gradually increasing the spacing interval between repeated tests ought to promote optimal long-term retention. Belief in the superiority of this technique is widespread, but empirical support is scarce. In addition, virtually all research on expanding retrieval has examined the learning of word pairs in paired-associate tasks. We report two experiments in which we examined the learning of text materials with expanding and equally spaced retrieval practice schedules. Subjects studied brief texts and recalled them in an initial learning phase. We manipulated the spacing of the repeated recall tests and examined final recall 1 week later. Overall we found that (1) repeated testing enhanced retention more than did taking a single test, (2) testing with feedback (restudying the passages) produced better retention than testing without feedback, but most importantly (3) there were no differences between expanding and equally spaced schedules of retrieval practice. Repeated retrieval enhanced long-term retention, but how the repeated tests were spaced did not matter.",
"title": ""
},
{
"docid": "04a4996eb5be0d321037cac5cb3c1ad6",
"text": "Repeated retrieval enhances long-term retention, and spaced repetition also enhances retention. A question with practical and theoretical significance is whether there are particular schedules of spaced retrieval (e.g., gradually expanding the interval between tests) that produce the best learning. In the present experiment, subjects studied and were tested on items until they could recall each one. They then practiced recalling the items on 3 repeated tests that were distributed according to one of several spacing schedules. Increasing the absolute (total) spacing of repeated tests produced large effects on long-term retention: Repeated retrieval with long intervals between each test produced a 200% improvement in long-term retention relative to repeated retrieval with no spacing between tests. However, there was no evidence that a particular relative spacing schedule (expanding, equal, or contracting) was inherently superior to another. Although expanding schedules afforded a pattern of increasing retrieval difficulty across repeated tests, this did not translate into gains in long-term retention. Repeated spaced retrieval had powerful effects on retention, but the relative schedule of repeated tests had no discernible impact.",
"title": ""
}
] |
[
{
"docid": "f8c7f0fc1fb365d874766f6d1da2215c",
"text": "Different works have shown that the combination of multiple loss functions is beneficial when training deep neural networks for a variety of prediction tasks. Generally, such multi-loss approaches are implemented via a weighted multi-loss objective function in which each term encodes a different desired inference criterion. The importance of each term is often set using empirically tuned hyper-parameters. In this work, we analyze the importance of the relative weighting between the different terms of a multi-loss function and propose to leverage the model’s uncertainty with respect to each loss as an automatically learned weighting parameter. We consider the application of colon gland analysis from histopathology images for which various multi-loss functions have been proposed. We show improvements in classification and segmentation accuracy when using the proposed uncertainty driven multi-loss function.",
"title": ""
},
{
"docid": "c87b75a335df334c3ae8eb38b7a872cf",
"text": "Image quality is important not only for the viewing experience, but also for the performance of image processing algorithms. Image quality assessment (IQA) has been a topic of intense research in the fields of image processing and computer vision. In this paper, we first analyze the factors that affect two-dimensional (2D) and three-dimensional (3D) image quality, and then provide an up-to-date overview on IQA for each main factor. The main factors that affect 2D image quality are fidelity and aesthetics. Another main factor that affects stereoscopic 3D image quality is visual comfort. We also describe the IQA databases and give the experimental results on representative IQA metrics. Finally, we discuss the challenges for IQA, including the influence of different factors on each other, the performance of IQA metrics in real applications, and the combination of quality assessment, restoration, and enhancement.",
"title": ""
},
{
"docid": "261930bf1b06e5c1e8cc47598e7e8a30",
"text": "Psychological First Aid (PFA) is the recommended immediate psychosocial response during crises. As PFA is now widely implemented in crises worldwide, there are increasing calls to evaluate its effectiveness. World Vision used PFA as a fundamental component of their emergency response following the 2014 conflict in Gaza. Anecdotal reports from Gaza suggest a range of benefits for those who received PFA. Though not intending to undertake rigorous research, World Vision explored learnings about PFA in Gaza through Focus Group Discussions with PFA providers, Gazan women, men and children and a Key Informant Interview with a PFA trainer. The qualitative analyses aimed to determine if PFA helped individuals to feel safe, calm, connected to social supports, hopeful and efficacious - factors suggested by the disaster literature to promote coping and recovery (Hobfoll et al., 2007). Results show positive psychosocial benefits for children, women and men receiving PFA, confirming that PFA contributed to: safety, reduced distress, ability to engage in calming practices and to support each other, and a greater sense of control and hopefulness irrespective of their adverse circumstances. The data shows that PFA formed an important part of a continuum of care to meet psychosocial needs in Gaza and served as a gateway for addressing additional psychosocial support needs. A \"whole-of-family\" approach to PFA showed particularly strong impacts and strengthened relationships. Of note, the findings from World Vision's implementation of PFA in Gaza suggests that future PFA research go beyond a narrow focus on clinical outcomes, to a wider examination of psychosocial, familial and community-based outcomes.",
"title": ""
},
{
"docid": "ab47d6b0ae971a5cf0a24f1934fbee63",
"text": "Deep representations, in particular ones implemented by convolutional neural networks, have led to good progress on many learning problems. However, the learned representations are hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study deep image representations by inverting them with an up-convolutional neural network. Application of this method to a deep network trained on ImageNet provides numerous insights into the properties of the feature representation. Most strikingly, the colors and the rough contours of an input image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.",
"title": ""
},
{
"docid": "cdced5f45620aa620cde9a937692a823",
"text": "Due to a rapid advancement in the electronic commerce technology, the use of credit cards has dramatically increased. As credit card becomes the most popular mode of payment for both online as well as regular purchase, cases of fraud associated with it are also rising. In this paper, we model the sequence of operations in credit card transaction processing using a hidden Markov model (HMM) and show how it can be used for the detection of frauds. An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected. We present detailed experimental results to show the effectiveness of our approach and compare it with other techniques available in the literature.",
"title": ""
},
{
"docid": "baa0bf8fe429c4fe8bfb7ebf78a1ed94",
"text": "The weakly supervised object localization (WSOL) is to locate the objects in an image while only image-level labels are available during the training procedure. In this work, the Selective Feature Category Mapping (SFCM) method is proposed, which introduces the Feature Category Mapping (FCM) and the widely-used selective search method to solve the WSOL task. Our FCM replaces layers after the specific layer in the state-of-the-art CNNs with a set of kernels and learns the weighted pooling for previous feature maps. It is trained with only image-level labels and then map the feature maps to their corresponding categories in the test phase. Together with selective search method, the location of each object is finally obtained. Extensive experimental evaluation on ILSVRC2012 and PASCAL VOC2007 benchmarks shows that SFCM is simple but very effective, and it is able to achieve outstanding classification performance and outperform the state-of-the-art methods in the WSOL task.",
"title": ""
},
{
"docid": "aaab50242d8d40e62491956773fa0cfb",
"text": "Grammatical Evolution (GE) is a population-based evolutionary algorithm, where a formal grammar is used in the genotype to phenotype mapping process. PonyGE2 is an open source implementation of GE in Python, developed at UCD's Natural Computing Research and Applications group. It is intended as an advertisement and a starting-point for those new to GE, a reference for students and researchers, a rapid-prototyping medium for our own experiments, and a Python workout. As well as providing the characteristic genotype to phenotype mapping of GE, a search algorithm engine is also provided. A number of sample problems and tutorials on how to use and adapt PonyGE2 have been developed.",
"title": ""
},
{
"docid": "5e86e48f73283ac321abee7a9f084bec",
"text": "Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks. One appealing property of such systems is their generality, as excellent performance can be achieved with a unified architecture and without task-specific feature engineering. However, it is unclear if such systems can be used for tasks without large amounts of training data. In this paper we explore the problem of transfer learning for neural sequence taggers, where a source task with plentiful annotations (e.g., POS tagging on Penn Treebank) is used to improve performance on a target task with fewer available annotations (e.g., POS tagging for microblogs). We examine the effects of transfer learning for deep hierarchical recurrent networks across domains, applications, and languages, and show that significant improvement can often be obtained. These improvements lead to improvements over the current state-ofthe-art on several well-studied tasks.1",
"title": ""
},
{
"docid": "9bc90b182e3acd0fd0cfa10a7abc32f8",
"text": "The advertising industry is seeking to use the unique data provided by the increasing usage of mobile devices and mobile applications (apps) to improve targeting and the experience with apps. As a consequence, understanding user behaviours with apps has gained increased interests from both academia and industry. In this paper we study user app engagement patterns and disruptions of those patterns in a data set unique in its scale and coverage of user activity. First, we provide a detailed account of temporal user activity patterns with apps and compare these to previous studies on app usage behavior. Then, in the second part, and the main contribution of this work, we take advantage of the scale and coverage of our sample and show how app usage behavior is disrupted through major political, social, and sports events.",
"title": ""
},
{
"docid": "0a4f5a46948310cfce44a8749cd479df",
"text": "This paper presents a tutorial introduction to contemporary cryptography. The basic information theoretic and computational properties of classical and modern cryptographic systems are presented, followed by cryptanalytic examination of several important systems and an examination of the application of cryptography to the security of timesharing systems and computer networks. The paper concludes with a guide to the cryptographic literature.",
"title": ""
},
{
"docid": "2472a20493c3319cdc87057cc3d70278",
"text": "Traffic flow prediction is an essential function of traffic information systems. Conventional approaches, using artificial neural networks with narrow network architecture and poor training samples for supervised learning, have been only partially successful. In this paper, a deep-learning neural-network based on TensorFlow™ is suggested for the prediction traffic flow conditions, using real-time traffic data. Until now, no research has applied the TensorFlow™ deep learning neural network model to the estimation of traffic conditions. The suggested supervised model is trained by a deep learning algorithm, which uses real traffic data aggregated every five minutes. Results demonstrate that the model's accuracy rate is around 99%.",
"title": ""
},
{
"docid": "a7c661ce625c60ef7a1ff498795b9020",
"text": "Median filtering technique is often used to remove additive white, salt and pepper noise from a signal or a source image. This filtering method is essential for the processing of digital data representing analog signals in real time. The median filter considers each pixel in the image in turn and looks at its nearby neighbors to determine whether or not it is representative of its surroundings. It replaces the pixel value with the median of neighboring pixel values. The median is calculated by first sorting all the pixel values from the surrounding neighborhood into numerical order and then replacing the pixel being considered with the middle pixel value. We have used graphics processing units (GPUs) to implement the post-processing, performed by NVIDIA Compute Unified Device Architecture (CUDA). Such a system is faster than the CPU version, or other traditional computing, for processing medical applications such as echography or Doppler. This paper shows the effect of the Median Filtering and a comparison of the performance of the CPU and GPU in terms of response time.",
"title": ""
},
{
"docid": "971a0e51042e949214fd75ab6203e36a",
"text": "This paper presents an automatic recognition method for color text characters extracted from scene images, which is robust to strong distortions, complex background, low resolution and non uniform lightning. Based on a specific architecture of convolutional neural networks, the proposed system automatically learns how to recognize characters without making any assumptions, without applying any preprocessing or post-processing and without using tunable parameters. For this purpose, we use a training set of scene text images extracted from the ICDAR 2003 public training database. The proposed method is compared to recent character recognition techniques for scene images based on the ICDAR 2003 public samples dataset in order to contribute to the state-of-the-art method comparison efforts initiated in ICDAR 2003. Experimental results show an encouraging average recognition rate of 84.53%, ranging from 93.47% for clear images to 67.86% for seriously distorted images.",
"title": ""
},
{
"docid": "cab1abfa4e945b3892fa19f3fa030992",
"text": "Catheter-associated urinary tract infections (UTIs) are a significant negative outcome. There are previous studies showing advantages in removing Foleys early but no studies of the effect of using intermittent as opposed to Foley catheterization in a trauma population. This study evaluates the effectiveness of a straight catheter protocol implemented in February 2015. A retrospective chart review was performed on all patients admitted to the trauma service at a single institution who had a UTI one year before and one year after protocol implementation on February 18, 2015. The protocol involved removing Foley catheters early and using straight catheterization. Rates were compared with Fisher's exact test and continuous data were compared using student's t test. There were 1477 patients admitted to the trauma service in the control year and 1707 in the study year. The control year had a total of 43 patients with a UTI, 28 of these met inclusion criteria. The intervention year had a total of 35 patients with a UTI and 17 met inclusion criteria. The rate of patients having a UTI went from 0.019 to 0.010 (p = 0.035). In females this rate went from 0.033 to 0.009 (p = 0.007), whereas in males it went from 0.012 to 0.010 (p = 0.837). This study shows a statistically significant improvement in the rate of UTIs after implementing an intermittent catheterization protocol suggesting that this protocol could improve the rate of UTIs in other trauma centers. We use this for all trauma patients, and it is being looked at for use hospital-wide.",
"title": ""
},
{
"docid": "52cc3f8cd0609b1ceaa7bb9b01643c8d",
"text": "A 24-GHz portable FMCW radar for short-range human tracking is designed, fabricated, and tested. The complete radar system weights 17.3 grams and has a dimension of 65mm×60mm×25mm. It has an on-board chirp generator, which generates a 45.7 Hz sawtooth signal to control the VCO. A 1.8GHz bandwidth ranging from 22.8 GHz to 24.6 GHz is transmitted. A pair of Vivaldi antennas with a bandwidth of 3.8 GHz, ranging from 22.5 GHz to 26.3 GHz, are implemented on the same board with the RF transceiver. A six-port structure is employed to down-convert the RF signal to baseband. Measurement result has validated its promising ability to for short-range human tracking.",
"title": ""
},
{
"docid": "ef84f7f53b60cf38972ff1eb04d0f6a5",
"text": "OBJECTIVE\nThe purpose of this prospective study was to evaluate the efficacy and safety of screw fixation without bone fusion for unstable thoracolumbar and lumbar burst fracture.\n\n\nMETHODS\nNine patients younger than 40 years underwent screw fixation without bone fusion, following postural reduction using a soft roll at the involved vertebra, in cases of burst fracture. Their motor power was intact in spite of severe canal compromise. The surgical procedure included postural reduction for 3 days and screw fixations at one level above, one level below and at the fractured level itself. The patients underwent removal of implants 12 months after the initial operation, due to possibility of implant failure. Imaging and clinical findings, including canal encroachment, vertebral height, clinical outcome, and complications were analyzed.\n\n\nRESULTS\nPrior to surgery, the mean pain score (visual analogue scale) was 8.2, which decreased to 2.2 at 12 months after screw fixation. None of the patients complained of worsening of pain during 6 months after implant removal. All patients were graded as having excellent or good outcomes at 6 months after implant removal. The proportion of canal compromise at the fractured level improved from 55% to 35% at 12 months after surgery. The mean preoperative vertebral height loss was 45.3%, which improved to 20.6% at 6 months after implant removal. There were no neurological deficits related to neural injury. The improved vertebral height and canal compromise were maintained at 6 months after implant removal.\n\n\nCONCLUSION\nShort segment pedicle screw fixation, including fractured level itself, without bone fusion following postural reduction can be an effective and safe operative technique in the management of selected young patients suffering from unstable burst fracture.",
"title": ""
},
{
"docid": "6a0f60881dddc5624787261e0470b571",
"text": "Title of Dissertation: AUTOMATED STRUCTURAL AND SPATIAL COMPREHENSION OF DATA TABLES Marco David Adelfio, Doctor of Philosophy, 2015 Dissertation directed by: Professor Hanan Samet Department of Computer Science Data tables on the Web hold large quantities of information, but are difficult to search, browse, and merge using existing systems. This dissertation presents a collection of techniques for extracting, processing, and querying tables that contain geographic data, by harnessing the coherence of table structures for retrieval tasks. Data tables, including spreadsheets, HTML tables, and those found in rich document formats, are the standard way of communicating structured data for typical computer users. Notably, geographic tables (i.e., those containing names of locations) constitute a large fraction of publicly-available data tables and are ripe for exposure to Internet users who are increasingly comfortable interacting with geographic data using web-based maps. Of particular interest is the creation of a large repository of geographic data tables that would enable novel queries such as “find vacation itineraries geographically similar to mine” for use in trip planning or “find demographic datasets that cover regions X, Y, and Z” for sociological research. In support of these goals, this dissertation identifies several methods for using the structure and context of data tables to improve the interpretation of the contents, even in the presence of ambiguity. First, a method for identifying functional components of data tables is presented, capitalizing on techniques for sequence labeling that are used in natural language processing. Next, a novel automated method for converting place references to physical latitude/longitude values, a process known as geotagging, is applied to tables with high accuracy. A classification procedure for identifying a specific class of geographic table, the travel itinerary, is also described, which borrows inspiration from optimization techniques for the traveling salesman problem (TSP). Finally, methods for querying spatially similar tables are introduced and several mechanisms for visualizing and interacting with the extracted geographic data are explored. AUTOMATED STRUCTURAL AND SPATIAL COMPREHENSION OF DATA TABLES",
"title": ""
},
{
"docid": "18ad179d4817cb391ac332dcbfe13788",
"text": "Many papers have been published on the knowledge base completion task in the past few years. Most of these introduce novel architectures for relation learning that are evaluated on standard datasets such as FB15k and WN18. This paper shows that the accuracy of almost all models published on the FB15k can be outperformed by an appropriately tuned baseline — our reimplementation of the DistMult model. Our findings cast doubt on the claim that the performance improvements of recent models are due to architectural changes as opposed to hyperparameter tuning or different training objectives. This should prompt future research to re-consider how the performance of models is evaluated and reported.",
"title": ""
},
{
"docid": "c71cfc228764fc96e7e747e119445939",
"text": "This review discusses and summarizes the recent developments and advances in the use of biodegradable materials for bone repair purposes. The choice between using degradable and non-degradable devices for orthopedic and maxillofacial applications must be carefully weighed. Traditional biodegradable devices for osteosynthesis have been successful in low or mild load bearing applications. However, continuing research and recent developments in the field of material science has resulted in development of biomaterials with improved strength and mechanical properties. For this purpose, biodegradable materials, including polymers, ceramics and magnesium alloys have attracted much attention for osteologic repair and applications. The next generation of biodegradable materials would benefit from recent knowledge gained regarding cell material interactions, with better control of interfacing between the material and the surrounding bone tissue. The next generations of biodegradable materials for bone repair and regeneration applications require better control of interfacing between the material and the surrounding bone tissue. Also, the mechanical properties and degradation/resorption profiles of these materials require further improvement to broaden their use and achieve better clinical results.",
"title": ""
},
{
"docid": "83ccee768c29428ea8a575b2e6faab7d",
"text": "Audio-based cough detection has become more pervasive in recent years because of its utility in evaluating treatments and the potential to impact the quality of life for individuals with chronic cough. We critically examine the current state of the art in cough detection, concluding that existing approaches expose private audio recordings of users and bystanders. We present a novel algorithm for detecting coughs from the audio stream of a mobile phone. Our system allows cough sounds to be reconstructed from the feature set, but prevents speech from being reconstructed intelligibly. We evaluate our algorithm on data collected in the wild and report an average true positive rate of 92% and false positive rate of 0.5%. We also present the results of two psychoacoustic experiments which characterize the tradeoff between the fidelity of reconstructed cough sounds and the intelligibility of reconstructed speech.",
"title": ""
}
] |
scidocsrr
|
3b5ac890cbf75d7a1531f5dc46aa2c49
|
Automatic turn segmentation for Movie & TV subtitles
|
[
{
"docid": "4b408cc1c15e6099c16fe0a94923f86e",
"text": "Speaker diarization is the task of determining “who spoke when?” in an audio or video recording that contains an unknown amount of speech and also an unknown number of speakers. Initially, it was proposed as a research topic related to automatic speech recognition, where speaker diarization serves as an upstream processing step. Over recent years, however, speaker diarization has become an important key technology for many tasks, such as navigation, retrieval, or higher level inference on audio data. Accordingly, many important improvements in accuracy and robustness have been reported in journals and conferences in the area. The application domains, from broadcast news, to lectures and meetings, vary greatly and pose different problems, such as having access to multiple microphones and multimodal information or overlapping speech. The most recent review of existing technology dates back to 2006 and focuses on the broadcast news domain. In this paper, we review the current state-of-the-art, focusing on research developed since 2006 that relates predominantly to speaker diarization for conference meetings. Finally, we present an analysis of speaker diarization performance as reported through the NIST Rich Transcription evaluations on meeting data and identify important areas for future research.",
"title": ""
}
] |
[
{
"docid": "04d3d9ebbde32b70d2125a88896667ba",
"text": "We formulate and study distributed estimation algorithms based on diffusion protocols to implement cooperation among individual adaptive nodes. The individual nodes are equipped with local learning abilities. They derive local estimates for the parameter of interest and share information with their neighbors only, giving rise to peer-to-peer protocols. The resulting algorithm is distributed, cooperative and able to respond in real time to changes in the environment. It improves performance in terms of transient and steady-state mean-square error, as compared with traditional noncooperative schemes. Closed-form expressions that describe the network performance in terms of mean-square error quantities are derived, presenting a very good match with simulations.",
"title": ""
},
{
"docid": "ac29d60761976a263629a93167516fde",
"text": "Abstruct1-V power supply high-speed low-power digital circuit technology with 0.5-pm multithreshold-voltage CMOS (MTCMOS) is proposed. This technology features both lowthreshold voltage and high-threshold voltage MOSFET’s in a single LSI. The low-threshold voltage MOSFET’s enhance speed Performance at a low supply voltage of 1 V or less, while the high-threshold voltage MOSFET’s suppress the stand-by leakage current during the sleep period. This technology has brought about logic gate characteristics of a 1.7-11s propagation delay time and 0.3-pW/MHz/gate power dissipation with a standard load. In addition, an MTCMOS standard cell library has been developed so that conventional CAD tools can be used to lay out low-voltage LSI’s. To demonstrate MTCMOS’s effectiveness, a PLL LSI based on standard cells was designed as a carrying vehicle. 18-MHz operation at 1 V was achieved using a 0.5-pm CMOS process.",
"title": ""
},
{
"docid": "16995051681cebf1e2dba1484a3f85bf",
"text": "A core problem in learning semantic parsers from denotations is picking out consistent logical forms—those that yield the correct denotation—from a combinatorially large space. To control the search space, previous work relied on restricted set of rules, which limits expressivity. In this paper, we consider a much more expressive class of logical forms, and show how to use dynamic programming to efficiently represent the complete set of consistent logical forms. Expressivity also introduces many more spurious logical forms which are consistent with the correct denotation but do not represent the meaning of the utterance. To address this, we generate fictitious worlds and use crowdsourced denotations on these worlds to filter out spurious logical forms. On the WIKITABLEQUESTIONS dataset, we increase the coverage of answerable questions from 53.5% to 76%, and the additional crowdsourced supervision lets us rule out 92.1% of spurious logical forms.",
"title": ""
},
{
"docid": "690603bd37dd8376893fc1bb1946fc03",
"text": "Recently, the use of herbal medicines has been increased all over the world due to their therapeutic effects and fewer adverse effects as compared to the modern medicines. However, many herbal drugs and herbal extracts despite of their impressive in-vitro findings demonstrates less or negligible in-vivo activity due to their poor lipid solubility or improper molecular size, resulting in poor absorption and hence poor bioavailability. Nowadays with the advancement in the technology, novel drug delivery systems open the door towards the development of enhancing bioavailability of herbal drug delivery systems. For last one decade many novel carriers such as liposomes, microspheres, nanoparticles, transferosomes, ethosomes, lipid based systems etc. have been reported for successful modified delivery of various herbal drugs. Many herbal compounds including quercetin, genistein, naringin, sinomenine, piperine, glycyrrhizin and nitrile glycoside have demonstrated capability to enhance the bioavailability. The objective of this review is to summarize various available novel drug delivery technologies which have been developed for delivery of drugs (herbal), and to achieve better therapeutic response. An attempt has also been made to compile a profile on bioavailability enhancers of herbal origin with the mechanism of action (wherever reported) and studies on improvement in drug bioavailability, exhibited particularly by natural compounds.",
"title": ""
},
{
"docid": "8e66f052f71059827995d466dd60566d",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Techno-economic analysis of PV/H2 systems C Darras, G Bastien, M Muselli, P Poggi, B Champel, P Serre-Combe",
"title": ""
},
{
"docid": "4a7a4db8497b0d13c8411100dab1b207",
"text": "A novel and simple resolver-to-dc converter is presented. It is shown that by appropriate processing of the sine and cosine resolver signals, the proposed converter may produce an output voltage proportional to the shaft angle. A dedicated compensation method is applied to produce an almost perfectly linear output. This enables determination of the angle with reasonable accuracy without a processor and/or a look-up table. The tests carried out under various operating conditions are satisfactory and in good agreement with theory. This paper gives the theoretical analysis, the computer simulation, the full circuit details, and experimental results of the proposed scheme.",
"title": ""
},
{
"docid": "fe08f3e1dc4fe2d71059b483c8532e88",
"text": "Digital asset management (DAM) has increasing benefits in booming global Internet economy, but it is still a great challenge for providing an effective way to manage, store, ingest, organize and retrieve digital asset. To do it, we present a new digital asset management platform, called DAM-Chain, with Transaction-based Access Control (TBAC) which integrates the distribution ABAC model and the blockchain technology. In this platform, the ABAC provides flexible and diverse authorization mechanisms for digital asset escrowed into blockchain while the blockchain's transactions serve as verifiable and traceable medium of access request procedure. We also present four types of transactions to describe the TBAC access control procedure, and provide the algorithms of these transactions corresponding to subject registration, object escrowing and publication, access request and grant. By maximizing the strengths of both ABAC and blockchain, this platform can support flexible and diverse permission management, as well as verifiable and transparent access authorization process in an open decentralized environment.",
"title": ""
},
{
"docid": "5a71d766ecd60b8973b965e53ef8ddfd",
"text": "An m-polar fuzzy model is useful for multi-polar information, multi-agent, multi-attribute and multiobject network models which gives more precision, flexibility, and comparability to the system as compared to the classical, fuzzy and bipolar fuzzy models. In this paper, m-polar fuzzy sets are used to introduce the notion of m-polar psi-morphism on product m-polar fuzzy graph (mFG). The action of this morphism is studied and established some results on weak and co-weak isomorphism. d2-degree and total d2-degree of a vertex in product mFG are defined and studied their properties. A real life situation has been modeled as an application of product mFG. c ©2018 World Academic Press, UK. All rights reserved.",
"title": ""
},
{
"docid": "7825ace1376c7f7ab3ed98ee5fda11d1",
"text": "In this paper, Arabic was investigated from the speech recognition problem point of view. We propose a novel approach to build an Arabic automated speech recognition system using Arabic environment. The system, based on the open source CMU Sphinx-4, was trained using Arabic characters.",
"title": ""
},
{
"docid": "609651c6c87b634814a81f38d9bfbc67",
"text": "Resistance training (RT) has shown the most promise in reducing/reversing effects of sarcopenia, although the optimum regime specific for older adults remains unclear. We hypothesized myofiber hypertrophy resulting from frequent (3 days/wk, 16 wk) RT would be impaired in older (O; 60-75 yr; 12 women, 13 men), sarcopenic adults compared with young (Y; 20-35 yr; 11 women, 13 men) due to slowed repair/regeneration processes. Myofiber-type distribution and cross-sectional area (CSA) were determined at 0 and 16 wk. Transcript and protein levels of myogenic regulatory factors (MRFs) were assessed as markers of regeneration at 0 and 24 h postexercise, and after 16 wk. Only Y increased type I CSA 18% (P < 0.001). O showed smaller type IIa (-16%) and type IIx (-24%) myofibers before training (P < 0.05), with differences most notable in women. Both age groups increased type IIa (O, 16%; Y, 25%) and mean type II (O, 23%; Y, 32%) size (P < 0.05). Growth was generally most favorable in young men. Percent change scores on fiber size revealed an age x gender interaction for type I fibers (P < 0.05) as growth among Y (25%) exceeded that of O (4%) men. Myogenin and myogenic differentiation factor D (MyoD) mRNAs increased (P < 0.05) in Y and O, whereas myogenic factor (myf)-5 mRNA increased in Y only (P < 0.05). Myf-6 protein increased (P < 0.05) in both Y and O. The results generally support our hypothesis as 3 days/wk training led to more robust hypertrophy in Y vs. O, particularly among men. However, this differential hypertrophy adaptation was not explained by age variation in MRF expression.",
"title": ""
},
{
"docid": "7e61b5f63d325505209c3284c8a444a1",
"text": "A method to design low-pass filters (LPF) having a defected ground structure (DGS) and broadened transmission-line elements is proposed. The previously presented technique for obtaining a three-stage LPF using DGS by Lim et al. is generalized to propose a method that can be applied in design N-pole LPFs for N/spl les/5. As an example, a five-pole LPF having a DGS is designed and measured. Accurate curve-fitting results and the successive design process to determine the required size of the DGS corresponding to the LPF prototype elements are described. The proposed LPF having a DGS, called a DGS-LPF, includes transmission-line elements with very low impedance instead of open stubs in realizing the required shunt capacitance. Therefore, open stubs, teeor cross-junction elements, and high-impedance line sections are not required for the proposed LPF, while they all have been essential in conventional LPFs. Due to the widely broadened transmission-line elements, the size of the DGS-LPF is compact.",
"title": ""
},
{
"docid": "f89bfe549a1547b3d60a5e321d7bfbb7",
"text": "We describe a simple and eff icient network intrusion detection algorithm that detects novel attacks by flagging anomalous field values in packet headers at the data link, network, and transport layers. In the 1999 DARPA off-line intrusion detection evaluation test set (Lippmann et al. 2000), we detect 76% of probes and 48% of denial of service attacks (at 10 false alarms per day). When this system is merged with the 18 systems in the original evaluation, the average detection rate for attacks of all types increases from 61% to 65%. We investigate the effect on performance when attack free training data is not available.",
"title": ""
},
{
"docid": "9ce232e2a49652ee7fbfe24c6913d52a",
"text": "Anthropometric quantities are widely used in epidemiologic research as possible confounders, risk factors, or outcomes. 3D laser-based body scans (BS) allow evaluation of dozens of quantities in short time with minimal physical contact between observers and probands. The aim of this study was to compare BS with classical manual anthropometric (CA) assessments with respect to feasibility, reliability, and validity. We performed a study on 108 individuals with multiple measurements of BS and CA to estimate intra- and inter-rater reliabilities for both. We suggested BS equivalents of CA measurements and determined validity of BS considering CA the gold standard. Throughout the study, the overall concordance correlation coefficient (OCCC) was chosen as indicator of agreement. BS was slightly more time consuming but better accepted than CA. For CA, OCCCs for intra- and inter-rater reliability were greater than 0.8 for all nine quantities studied. For BS, 9 of 154 quantities showed reliabilities below 0.7. BS proxies for CA measurements showed good agreement (minimum OCCC > 0.77) after offset correction. Thigh length showed higher reliability in BS while upper arm length showed higher reliability in CA. Except for these issues, reliabilities of CA measurements and their BS equivalents were comparable.",
"title": ""
},
{
"docid": "5dfce8ea895d5349654b5e92cb485f8e",
"text": "Identifying current and future informal regions within cities remains a crucial issue for policymakers and governments in developing countries. The delineation process of identifying such regions in cities requires a lot of resources. While there are various studies that identify informal settlements based on satellite image classification, relying on both supervised or unsupervised machine learning approaches, these models either require multiple input data to function or need further development with regards to precision. In this paper, we introduce a novel method for identifying and predicting informal settlements using only street intersections data, regardless of the variation of urban form, number of floors, materials used for construction or street width. With such minimal input data, we attempt to provide planners and policy-makers with a pragmatic tool that can aid in identifying informal zones in cities. The algorithm of the model is based on spatial statistics and a machine learning approach, using Multinomial Logistic Regression (MNL) and Artificial Neural Networks (ANN). The proposed model relies on defining informal settlements based on two ubiquitous characteristics that these regions tend to be filled in with smaller subdivided lots of housing relative to the formal areas within the local context, and the paucity of services and infrastructure within the boundary of these settlements that require relatively bigger lots. We applied the model in five major cities in Egypt and India that have spatial structures in which informality is present. These cities are Greater Cairo, Alexandria, Hurghada and Minya in Egypt, and Mumbai in India. The predictSLUMS model shows high validity and accuracy for identifying and predicting informality within the same city the model was trained on or in different ones of a similar context.",
"title": ""
},
{
"docid": "0066d03bf551e64b9b4a1595f1494347",
"text": "Visual Text Analytics has been an active area of interdisciplinary research (http://textvis.lnu.se/). This interactive tutorial is designed to give attendees an introduction to the area of information visualization, with a focus on linguistic visualization. After an introduction to the basic principles of information visualization and visual analytics, this tutorial will give an overview of the broad spectrum of linguistic and text visualization techniques, as well as their application areas [3]. This will be followed by a hands-on session that will allow participants to design their own visualizations using tools (e.g., Tableau), libraries (e.g., d3.js), or applying sketching techniques [4]. Some sample datasets will be provided by the instructor. Besides general techniques, special access will be provided to use the VisArgue framework [1] for the analysis of selected datasets.",
"title": ""
},
{
"docid": "4df6678c57115f6179587cff1cc5f228",
"text": "Depth maps captured by Kinect-like cameras are lack of depth data in some areas and suffer from heavy noise. These defects have negative impacts on practical applications. In order to enhance the depth maps, this paper proposes a new inpainting algorithm that extends the original fast marching method (FMM) to reconstruct unknown regions. The extended FMM incorporates an aligned color image as the guidance for inpainting. An edge-preserving guided filter is further applied for noise reduction. To validate our algorithm and compare it with other existing methods, we perform experiments on both the Kinect data and the Middlebury dataset which, respectively, provide qualitative and quantitative results. The results show that our method is efficient and superior to others.",
"title": ""
},
{
"docid": "f5405c8fb7ad62d4277837bd7036b0d3",
"text": "Context awareness is one of the important fields in ubiquitous computing. Smart Home, a specific instance of ubiquitous computing, provides every family with opportunities to enjoy the power of hi-tech home living. Discovering that relationship among user, activity and context data in home environment is semantic, therefore, we apply ontology to model these relationships and then reason them as the semantic information. In this paper, we present the realization of smart home’s context-aware system based on ontology. We discuss the current challenges in realizing the ontology context base. These challenges can be listed as collecting context information from heterogeneous sources, such as devices, agents, sensors into ontology, ontology management, ontology querying, and the issue related to environment database explosion.",
"title": ""
},
{
"docid": "c2963a302e40488fd6094a6861d93cee",
"text": "We introduce a mathematical theory called market connectivity that gives concrete ways to both measure the efficiency of markets and find inefficiencies in large markets. The theory leads to new methods for testing the famous efficient markets hypothesis that do not suffer from the joint-hypothesis problem that has plagued past work. Our theory suggests metrics that can be used to compare the efficiency of one market with another, to find inefficiencies that may be profitable to exploit, and to evaluate the impact of policy and regulations on market efficiency. A market’s efficiency is tied to its ability to communicate information relevant to market participants. Market connectivity calculates the speed and reliability with which this communication is carried out via trade in the market. We model the market by a network called the trade network, which can be computed by recording transactions in the market over a fixed interval of time. The nodes of the network correspond to participants in the market. Every pair of nodes that trades in the market is connected by an edge that is weighted by the rate of trade, and associated with a vector that represents the type of item that is bought or sold. We evaluate the ability of the market to communicate by considering how it deals with shocks. A shock is a change in the beliefs of market participants about the value of the products that they trade. We compute the effect of every potential significant shock on trade in the market. We give mathematical definitions for a few concepts: • The tension and energy of the network are related concepts that measure the strength of the connections between sets of participants that trade similar items. They measure the amount of trade that is affected by significant shocks. They are high when there are many paths of high rate of trade that connect those with differing beliefs about the value of items. They are low when information from some large set of participants must take a long time to reach some other large set via trade. • A bottleneck in the network is a small set of nodes that monopolizes an unusually large share of the trade in the network. The nodes in the bottleneck have an incentive to set prices incorrectly and interfere with the fair transmission of information in the market. We give explicit mathematical definitions that capture these concepts and allow for quantitative measurements of market inefficiency. 1 ar X iv :1 70 2. 03 29 0v 1 [ qfi n. E C ] 9 F eb 2 01 7",
"title": ""
},
{
"docid": "4b156066e72d0e8bf220c3e13738d91c",
"text": "We present an unsupervised approach for abnormal event detection in videos. We propose, given a dictionary of features learned from local spatiotemporal cuboids using the sparse coding objective, the abnormality of an event depends jointly on two factors: the frequency of each feature in reconstructing all events (or, rarity of a feature) and the strength by which it is used in reconstructing the current event (or, the absolute coefficient). The Incremental Coding Length (ICL) of a feature is a measure of its entropy gain. Given a dictionary, the ICL computation does not involve any parameter, is computationally efficient and has been used for saliency detection in images with impressive results. In this paper, the rarity of a dictionary feature is learned online as its average energy, a function of its ICL. The proposed approach is applicable to real world streaming videos. Experiments on three benchmark datasets and evaluations in comparison with a number of mainstream algorithms show that the approach is comparable to the state-of-the-art.",
"title": ""
},
{
"docid": "50c493ce0ac1f60889fb2a4b490fc939",
"text": "Future cellular networks will be of high capacity and heterogeneity. The structure and architecture will require high efficiency and scalability in network operation and management. In this paper, we address main requirements and challenges of future cellular networks and introduce network function virtualisation (NFV) with software defined networking (SDN) to realize the self-organizing (SO) scheme. NFV integrates the hardware appliances together in industry standard servers. And SDN performs as core controller of the network. The proposed SO scheme is based on soft fractional frequency reuse (SFFR) framework. The scheme takes different traffic demands into consideration and allocates the power adaptively. Finally the system is proved to be more scalable, energy-saving, and intelligent.",
"title": ""
}
] |
scidocsrr
|
7ccb2e19d5ab09429b32c39a2747f3c3
|
A Cognitive Model of Theory of Mind
|
[
{
"docid": "fdd94d3d9df0171e41179336bd282bdd",
"text": "The authors propose a reinforcement-learning mechanism as a model for recurrent choice and extend it to account for skill learning. The model was inspired by recent research in neurophysiological studies of the basal ganglia and provides an integrated explanation of recurrent choice behavior and skill learning. The behavior includes effects of differential probabilities, magnitudes, variabilities, and delay of reinforcement. The model can also produce the violation of independence, preference reversals, and the goal gradient of reinforcement in maze learning. An experiment was conducted to study learning of action sequences in a multistep task. The fit of the model to the data demonstrated its ability to account for complex skill learning. The advantages of incorporating the mechanism into a larger cognitive architecture are discussed.",
"title": ""
},
{
"docid": "9fb06b9431ddebcad14ac970ec3baa20",
"text": "We use a new model of metarepresentational development to predict a cognitive deficit which could explain a crucial component of the social impairment in childhood autism. One of the manifestations of a basic metarepresentational capacity is a ‘theory of mind’. We have reason to believe that autistic children lack such a ‘theory’. If this were so, then they would be unable to impute beliefs to others and to predict their behaviour. This hypothesis was tested using Wimmer and Perner’s puppet play paradigm. Normal children and those with Down’s syndrome were used as controls for a group of autistic children. Even though the mental age of the autistic children was higher than that of the controls, they alone failed to impute beliefs to others. Thus the dysfunction we have postulated and demonstrated is independent of mental retardation and specific to autism.",
"title": ""
}
] |
[
{
"docid": "2710599258f440d27efe958ed2cfb576",
"text": "In this paper, we present an evaluation of learning algorithms of a novel rule evaluation support method for postprocessing of mined results with rule evaluation models based on objective indices. Post-processing of mined results is one of the key processes in a data mining process. However, it is difficult for human experts to completely evaluate several thousands of rules from a large dataset with noises. To reduce the costs in such rule evaluation task, we have developed the rule evaluation support method with rule evaluation models, which learn from objective indices for mined classification rules and evaluations by a human expert for each rule. To enhance adaptability of rule evaluation models, we introduced a constructive meta-learning system to choose proper learning algorithms. Then, we have done the case study on the meningitis data mining as an actual problem",
"title": ""
},
{
"docid": "f3f4d14366b8f15a9424ed2ce47dd9da",
"text": "As mobile devices become location-aware, they offer the promise of powerful new applications. While computers work with physical locations like latitude and longitude, people think and speak in terms of places, like \"my office\" or ``Sue's house''. Therefore, location-aware applications must incorporate the notion of places to achieve their full potential. This requires systems to acquire the places that are meaningful for each user. Previous work has explored algorithms to discover personal places from location data. However, we know of no empirical, quantitative evaluations of these algorithms, so the question of how well they work currently is unanswered. We report here on an experiment that begins to provide an answer; we show that a place discovery algorithm can do a good job of discovering places that are meaningful to users. The results have important implications for system design and open up interesting avenues for future research.",
"title": ""
},
{
"docid": "e87a52f3e4f3c08838a2eff7501a12e5",
"text": "A coordinated approach to digital forensic readiness (DFR) in a large organisation requires the management and monitoring of a wide variety of resources, both human and technical. The resources involved in DFR in large organisations typically include staff from multiple departments and business units, as well as network infrastructure and computing platforms. The state of DFR within large organisations may therefore be adversely affected if the myriad human and technical resources involved are not managed in an optimal manner. This paper contributes to DFR by proposing the novel concept of a digital forensic readiness management system (DFRMS). The purpose of a DFRMS is to assist large organisations in achieving an optimal level of management for DFR. In addition to this, we offer an architecture for a DFRMS. This architecture is based on requirements for DFR that we ascertained from an exhaustive review of the DFR literature. We describe the architecture in detail and show that it meets the requirements set out in the DFR literature. The merits and disadvantages of the architecture are also discussed. Finally, we describe and explain an early prototype of a DFRMS.",
"title": ""
},
{
"docid": "41e714ba7f26bfab161863b8033d8ffe",
"text": "Power line inspection and maintenance is a slowly but surely emerging field for robotics. This paper describes the control scheme implemented in LineScout technology, one of the first teleoperated obstacle crossing systems that has progressed to the stage of actually performing very-high-voltage power line jobs. Following a brief overview of the hardware and software architecture, key challenges associated with the objectives of achieving reliability, robustness and ease of operation are presented. The coordinated control through visual feedback of all motors needed for obstacle crossing calls for a coherent strategy, an effective graphical user interface and rules to ensure safe, predictable operation. Other features such as automatic weight balancing are introduced to lighten the workload and let the operator concentrate on inspecting power line components. Open architecture was considered for progressive improvements. The features required to succeed in making power line robots fully autonomous are also discussed.",
"title": ""
},
{
"docid": "9c25a2e343e9e259a9881fd13983c150",
"text": "Advances in cognitive, affective, and social neuroscience raise a host of new questions concerning the ways in which neuroscience can and should be used. These advances also challenge our intuitions about the nature of humans as moral and spiritual beings. Neuroethics is the new field that grapples with these issues. The present article surveys a number of applications of neuroscience to such diverse arenas as marketing, criminal justice, the military, and worker productivity. The ethical, legal, and societal effects of these applications are discussed. Less practical, but perhaps ultimately more consequential, is the impact of neuroscience on our worldview and our understanding of the human person.",
"title": ""
},
{
"docid": "54f5af4ced366eeebccc973a081497e2",
"text": "Visual quality of color images is an important aspect in various applications of digital image processing and multimedia. A large number of visual quality metrics (indices) has been proposed recently. In order to assess their reliability, several databases of color images with various sets of distortions have been exploited. Here we present a new database called TID2013 that contains a larger number of images. Compared to its predecessor TID2008, seven new types and one more level of distortions are included. The need for considering these new types of distortions is briefly described. Besides, preliminary results of experiments with a large number of volunteers for determining the mean opinion score (MOS) are presented. Spearman and Kendall rank order correlation factors between MOS and a set of popular metrics are calculated and presented. Their analysis shows that adequateness of the existing metrics is worth improving. Special attention is to be paid to accounting for color information and observers focus of attention to locally active areas in images.",
"title": ""
},
{
"docid": "bb162a51c55fcb2d9b81d7786bf74da0",
"text": "The recently introduced roofline model plots the performance of executed code against its operational intensity (operations count divided by memory traffic). It also includes two platform-specific performance ceilings: the processor's peak performance and a ceiling derived from the memory bandwidth, which is relevant for code with low operational intensity. The model thus makes more precise the notions of memory- and compute-bound and, despite its simplicity, can provide an insightful visualization of bottlenecks. As such it can be valuable to guide manual code optimization as well as in education. Unfortunately, to date the model has been used almost exclusively with back-of-the-envelope calculations and not with measured data. In this paper we show how to produce roofline plots with measured data on recent generations of Intel platforms. We show how to accurately measure the necessary quantities for a given program using performance counters, including threaded and vectorized code, and for warm and cold cache scenarios. We explain the measurement approach, its validation, and discuss limitations. Finally, we show, to this extent for the first time, a set of roofline plots with measured data for common numerical functions on a variety of platforms and discuss their possible uses.",
"title": ""
},
{
"docid": "354bbe38d4571bf7f1f95453f9958eb6",
"text": "This paper focuses and talks about the wide and varied areas of applications wireless sensor networks have taken over today, right from military surveillance and smart home automation to medical and environmental monitoring. It also gives a gist why security is a primary issue of concern even today for the same, discussing the existing solutions along with outlining the security issues and suggesting possible directions of research over the same. This paper is about the security of wireless sensor networks. These networks create new security threats in comparison to the traditional methods due to some unique characteristics of these networks. A detailed study of the threats, risks and attacks need to be done in order to come up with proper security solutions. Here the paper presents the unique characteristics of these networks and how they pose new security threats. There are several security goals of these networks. These goals and requirements must be kept in mind while designing of security solutions for these networks. It also describes the various attacks that are possible at important layers such as data-link, network, physical and transport layer.",
"title": ""
},
{
"docid": "59ec6a8b6034b79211597d745964b281",
"text": "This paper discusses whether memory technologies can continue advances beyond sub-50nm node especially for DRAM and NAND flash memories. First, the barriers to shrink technology are addressed for DRAM and NAND flash memories, depending on their inherent operation principles. Then, details of technology solutions are introduced and its manufacturability is examined. Beyond 30nm node, It is expected that 3-dimensional transistor scheme is needed for both logic and memory array in addition to the development of new materials and structural technologies",
"title": ""
},
{
"docid": "f649286f5bb37530bbfced0a48513f4f",
"text": "Collobert et al. (2011) showed that deep neural network architectures achieve stateof-the-art performance in many fundamental NLP tasks, including Named Entity Recognition (NER). However, results were only reported for English. This paper reports on experiments for German Named Entity Recognition, using the data from the GermEval 2014 shared task on NER. Our system achieves an F1-measure of 75.09% according to the official metric.",
"title": ""
},
{
"docid": "9c8bc65635a9c8f0d8caf510399377f4",
"text": "El autor es José Luis Ortega, investigador del CSIC y miembro del Laboratorio de Cibermetría, que cuenta con una importante trayectoria investigadora con publicaciones nacionales e internacionales en el ámbito de la cibermetría, la visualización de información y el análisis de redes. La obra está escrita en un inglés claro y sencillo y su título refleja de forma precisa su contenido: los motores de búsqueda académicos.",
"title": ""
},
{
"docid": "d47fe2f028b03b9b10a81d1a71c466ab",
"text": "This paper investigates the system-level performance of downlink non-orthogonal multiple access (NOMA) with power-domain user multiplexing at the transmitter side and successive interference canceller (SIC) on the receiver side. The goal is to clarify the performance gains of NOMA for future LTE (Long-Term Evolution) enhancements, taking into account design aspects related to the LTE radio interface such as, frequency-domain scheduling with adaptive modulation and coding (AMC), and NOMA specific functionalities such as error propagation of SIC receiver, multi-user pairing and transmit power allocation. In particular, a pre-defined user grouping and fixed per-group power allocation are proposed to reduce the overhead associated with power allocation signalling. Based on computer simulations, we show that for both wideband and subband scheduling and both low and high mobility scenarios, NOMA can still provide a hefty portion of its expected gains even with error propagation, and also when the proposed simplified user grouping and power allocation are used.",
"title": ""
},
{
"docid": "92abe28875dbe72fbc16bdf41b324126",
"text": "We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Further, even if given a model, one still has to decide where to grasp the object. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Given two (or more) images of an object, our algorithm attempts to identify a few points in each image corresponding to good locations at which to grasp the object. This sparse set of points is then triangulated to obtain a 3-d location at which to attempt a grasp. This is in contrast to standard dense stereo, which tries to triangulate every single point in an image (and often fails to return a good 3-d model). Our algorithm for identifying grasp locations from an image is trained via supervised learning, using synthetic images for the training set. We demonstrate this approach on two robotic manipulation platforms. Our algorithm successfully grasps a wide variety of objects, such as plates, tape-rolls, jugs, cellphones, keys, screwdrivers, staplers, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set. We also apply our method to the task of unloading items from dishwashers. 1",
"title": ""
},
{
"docid": "370d5863a967bd40af41183416b4fb58",
"text": "In order to reduce the security risk of a commercial aircraft, passengers are not allowed to take certain items in carry-on baggage. For this reason, human operators are trained to detect prohibited items using a manually controlled baggage screening process. In this paper, we propose the use of a method based on multiple X-ray views to detect some regular prohibited items with very defined shapes and sizes. The method consists of two steps: ‘structure estimation’, to obtain a geometric model of the multiple views from the object to be inspected (a baggage), and ‘parts detection’, to detect the parts of interest (prohibited items). The geometric model is estimated using a structure from motion algorithm. The detection of the parts of interest is performed by an adhoc segmentation algorithm (object dependent) followed by a general tracking algorithm based on geometric and appearance constraints. In order to illustrate the effectiveness of the proposed method, experimental results on detecting regular objects −razor blades and guns− are shown yielding promising results.",
"title": ""
},
{
"docid": "e51ab841a0cc013f88607ebbb65e5d1e",
"text": "Seeds of sunflower (Helianthus annuus) were exposed in batches to static magnetic fields of strength from 0 to 250mT in steps of 50mT for 1-4h in steps of 1h. Treatment of sunflower seeds in these magnetic fields increased the speed of germination, seedling length and seedling dry weight under laboratory germination tests. Of the various treatments, 50 and 200mT for 2h yielded the peak performance. Exposure of seeds to magnetic fields improved seed coat membrane integrity and reduced the cellular leakage and electrical conductivity. Treated seeds planted in soil resulted in statistically higher seedling dry weight, root length, root surface area and root volume in 1-month-old seedlings. In germinating seeds, enzyme activities of alpha-amylase, dehydrogenase and protease were significantly higher in treated seeds in contrast to controls. The higher enzyme activity in magnetic-field-treated sunflower seeds could be triggering the fast germination and early vigor of seedlings.",
"title": ""
},
{
"docid": "2232f81da81ced942da548d0669bafc6",
"text": "Quantitative prediction of quality properties (i.e. extra-functional properties such as performance, reliability, and cost) of software architectures during design supports a systematic software engineering approach. Designing architectures that exhibit a good trade-off between multiple quality criteria is hard, because even after a functional design has been created, many remaining degrees of freedom in the software architecture span a large, discontinuous design space. In current practice, software architects try to find solutions manually, which is time-consuming, can be error-prone and can lead to suboptimal designs. We propose an automated approach to search the design space for good solutions. Starting with a given initial architectural model, the approach iteratively modifies and evaluates architectural models. Our approach applies a multi-criteria genetic algorithm to software architectures modelled with the Palladio Component Model. It supports quantitative performance, reliability, and cost prediction and can be extended to other quantitative quality criteria of software architectures. We validate the applicability of our approach by applying it to an architecture model of a component-based business information system and analyse its quality criteria trade-offs by automatically investigating more than 1200 alternative design candidates.",
"title": ""
},
{
"docid": "02edb85279317752bd86a8fe7f0ccfc0",
"text": "Despite the potential wealth of educational indicators expressed in a student's approach to homework assignments, how students arrive at their final solution is largely overlooked in university courses. In this paper we present a methodology which uses machine learning techniques to autonomously create a graphical model of how students in an introductory programming course progress through a homework assignment. We subsequently show that this model is predictive of which students will struggle with material presented later in the class.",
"title": ""
},
{
"docid": "83cea367e54cfe92718742cacbd61adf",
"text": "We present an analysis into the inner workings of Convolutional Neural Networks (CNNs) for processing text. CNNs used for computer vision can be interpreted by projecting filters into image space, but for discrete sequence inputs CNNs remain a mystery. We aim to understand the method by which the networks process and classify text. We examine common hypotheses to this problem: that filters, accompanied by global max-pooling, serve as ngram detectors. We show that filters may capture several different semantic classes of ngrams by using different activation patterns, and that global max-pooling induces behavior which separates important ngrams from the rest. Finally, we show practical use cases derived from our findings in the form of model interpretability (explaining a trained model by deriving a concrete identity for each filter, bridging the gap between visualization tools in vision tasks and NLP) and prediction interpretability (explaining predictions).",
"title": ""
},
{
"docid": "26b0df7bc7fd6671218f441fe3fe5a5c",
"text": "Existing techniques for disambiguating named entities in text mostly focus on Wikipedia as a target catalog of entities. Yet for many types of entities, such as restaurants and cult movies, relational databases exist that contain far more extensive information than Wikipedia. This paper introduces a new task, called Open-Database Named-Entity Disambiguation (Open-DB NED), in which a system must be able to resolve named entities to symbols in an arbitrary database, without requiring labeled data for each new database. We introduce two techniques for Open-DB NED, one based on distant supervision and the other based on domain adaptation. In experiments on two domains, one with poor coverage by Wikipedia and the other with near-perfect coverage, our Open-DB NED strategies outperform a state-of-the-art Wikipedia NED system by over 25% in accuracy.",
"title": ""
},
{
"docid": "45b52dbfd26ac037f5113b8377540705",
"text": "This paper shows for the first time that is possible to reconstruct the position of rigid objects and to jointly recover affine camera calibration solely from a set of object detections in a video sequence. In practice, this work can be considered as the extension of Tomasi and Kanade factorization method using objects. Instead of using points to form a rank constrained measurement matrix, we can form a matrix with similar rank properties using 2D object detection proposals. In detail, we first fit an ellipse onto the image plane at each bounding box as given by the object detector. The collection of all the ellipses in the dual space is used to create a measurement matrix that gives a specific rank constraint. This matrix can be factorised and metrically upgraded in order to provide the affine camera matrices and the 3D position of the objects as an ellipsoid. Moreover, we recover the full 3D quadric thus giving additional information about object occupancy and 3D pose. Finally, we also show that 2D points measurements can be seamlessly included in the framework to reduce the number of objects required. This last aspect unifies the classical point-based Tomasi and Kanade approach with objects in a unique framework. Experiments with synthetic and real data show the feasibility of our approach for the affine camera case.",
"title": ""
}
] |
scidocsrr
|
1af7f0ad44b6307cb91b77efdb30f8f1
|
Multi-objective workflow grid scheduling using ε-fuzzy dominance sort based discrete particle swarm optimization
|
[
{
"docid": "624e78153b58a69917d313989b72e6bf",
"text": "In this article we describe a novel Particle Swarm Optimization (PSO) approach to multi-objective optimization (MOO), called Time Variant Multi-Objective Particle Swarm Optimization (TV-MOPSO). TV-MOPSO is made adaptive in nature by allowing its vital parameters (viz., inertia weight and acceleration coefficients) to change with iterations. This adaptiveness helps the algorithm to explore the search space more efficiently. A new diversity parameter has been used to ensure sufficient diversity amongst the solutions of the non-dominated fronts, while retaining at the same time the convergence to the Pareto-optimal front. TV-MOPSO has been compared with some recently developed multi-objective PSO techniques and evolutionary algorithms for 11 function optimization problems, using different performance measures. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "51b36c7d660d723fad2ee1911ab44295",
"text": "This paper presents an overview of our most recent results concerning the Particle Swarm Optimization (PSO) method. Techniques for the alleviation of local minima, and for detecting multiple minimizers are described. Moreover, results on the ability of the PSO in tackling Multiobjective, Minimax, Integer Programming and ℓ1 errors-in-variables problems, as well as problems in noisy and continuously changing environments, are reported. Finally, a Composite PSO, in which the heuristic parameters of PSO are controlled by a Differential Evolution algorithm during the optimization, is described, and results for many well-known and widely used test functions are given.",
"title": ""
}
] |
[
{
"docid": "fa7d10c25602bd71ce1f46f1bb0b3f7a",
"text": "Plastic marine pollution is a major environmental concern, yet a quantitative description of the scope of this problem in the open ocean is lacking. Here, we present a time series of plastic content at the surface of the western North Atlantic Ocean and Caribbean Sea from 1986 to 2008. More than 60% of 6136 surface plankton net tows collected buoyant plastic pieces, typically millimeters in size. The highest concentration of plastic debris was observed in subtropical latitudes and associated with the observed large-scale convergence in surface currents predicted by Ekman dynamics. Despite a rapid increase in plastic production and disposal during this time period, no trend in plastic concentration was observed in the region of highest accumulation.",
"title": ""
},
{
"docid": "e5b9c4594c374d6bf05594d0bda38309",
"text": "An instance I of the Hospitals / Residents problem (HR) [6, 7, 15] involves a set R = {r1, . . . , rn} of residents and a set H = {h1, . . . , hm} of hospitals. Each hospital hj ∈ H has a positive integral capacity, denoted by cj . Also, each resident ri ∈ R has a preference list in which he ranks in strict order a subset of H. A pair (ri, hj) ∈ R ×H is said to be acceptable if hj appears in ri’s preference list; in this case ri is said to find hj acceptable. Similarly each hospital hj ∈ H has a preference list in which it ranks in strict order those residents who find hj acceptable. Given any three agents x, y, z ∈ R ∪ H, x is said to prefer y to z if x finds each of y and z acceptable, and y precedes z on x’s preference list. Let C = ∑ hj∈H cj . Let A denote the set of acceptable pairs in I, and let L = |A|. An assignment M is a subset of A. If (ri, hj) ∈ M , ri is said to be assigned to hj , and hj is assigned ri. For each q ∈ R ∪ H, the set of assignees of q in M is denoted by M(q). If ri ∈ R and M(ri) = ∅, ri is said to be unassigned, otherwise ri is assigned. Similarly, any hospital hj ∈ H is under-subscribed, full or over-subscribed according as |M(hj)| is less than, equal to, or greater than cj , respectively. A matching M is an assignment such that |M(ri)| ≤ 1 for each ri ∈ R and |M(hj)| ≤ cj for each hj ∈ H (i.e., no resident is assigned to an unacceptable hospital, each resident is assigned to at most one hospital, and no hospital is over-subscribed). For notational convenience, given a matching M and a resident ri ∈ R such that M(ri) 6= ∅, where there is no ambiguity the notation M(ri) is also used to refer to the single member of M(ri). A pair (ri, hj) ∈ A\\M blocks a matching M , or is a blocking pair for M , if the following conditions are satisfied relative to M :",
"title": ""
},
{
"docid": "c6051b8e0ab9751cc21a6dcdb195cac6",
"text": "Modeling the entailment relation over sentences is one of the generic problems of natural language understanding. In order to account for this problem, we design a theorem prover for Natural Logic, a logic whose terms resemble natural language expressions. The prover is based on an analytic tableau method and employs syntactically and semantically motivated schematic rules. Pairing the prover with a preprocessor, which generates formulas of Natural Logic from linguistic expressions, results in a proof system for natural language. It is shown that the system obtains a comparable accuracy (≈81%) on the unseen SICK data while achieving the stateof-the-art precision (≈98%).",
"title": ""
},
{
"docid": "8694f84e4e2bd7da1e678a3b38ccd447",
"text": "This paper describes a general methodology for extracting attribute-value pairs from web pages. It consists of two phases: candidate generation, in which syntactically likely attribute-value pairs are annotated; and candidate filtering, in which semantically improbable annotations are removed. We describe three types of candidate generators and two types of candidate filters, all of which are designed to be massively parallelizable. Our methods can handle 1 billion web pages in less than 6 hours with 1,000 machines. The best generator and filter combination achieves 70% F-measure compared to a hand-annotated corpus.",
"title": ""
},
{
"docid": "b0a1a782ce2cbf5f152a52537a1db63d",
"text": "In piezoelectric energy harvesting (PEH), with the use of the nonlinear technique named synchronized switching harvesting on inductor (SSHI), the harvesting efficiency can be greatly enhanced. Furthermore, the introduction of its self-powered feature makes this technique more applicable for standalone systems. In this article, a modified circuitry and an improved analysis for self-powered SSHI are proposed. With the modified circuitry, direct peak detection and better isolation among different units within the circuit can be achieved, both of which result in further removal on dissipative components. In the improved analysis, details in open circuit voltage, switching phase lag, and voltage inversion factor are discussed, all of which lead to a better understanding to the working principle of the self-powered SSHI. Both analyses and experiments show that, in terms of harvesting power, the higher the excitation level, the closer between self-powered and ideal SSHI; at the same time, the more beneficial the adoption of self-powered SSHI treatment in piezoelectric energy harvesting, compared to the standard energy harvesting (SEH) technique.",
"title": ""
},
{
"docid": "947ffeb4fff1ca4ee826d71d4add399e",
"text": "Description bttroductian. A maximal complete subgraph (clique) is a complete subgraph that is not contained in any other complete subgraph. A recent paper [1] describes a number of techniques to find maximal complete subgraphs of a given undirected graph. In this paper, we present two backtracking algorithms, using a branchand-bound technique [4] to cut off branches that cannot lead to a clique. The first version is a straightforward implementation of the basic algorithm. It is mainly presented to illustrate the method used. This version generates cliques in alphabetic (lexicographic) order. The second version is derived from the first and generates cliques in a rather unpredictable order in an attempt to minimize the number of branches to be traversed. This version tends to produce the larger cliques first and to generate sequentially cliques having a large common intersection. The detailed algorithm for version 2 is presented here. Description o f the algorithm--Version 1. Three sets play an important role in the algorithm. (1) The set compsub is the set to be extended by a new point or shrunk by one point on traveling along a branch of the backtracking tree. The points that are eligible to extend compsub, i.e. that are connected to all points in compsub, are collected recursively in the remaining two sets. (2) The set candidates is the set of all points that will in due time serve as an extension to the present configuration of compsub. (3) The set not is the set of all points that have at an earlier stage already served as an extension of the present configuration of compsub and are now explicitly excluded. The reason for maintaining this set trot will soon be made clear. The core of the algorithm consists of a recursively defined extension operator that will be applied to the three sets Just described. It has the duty to generate all extensions of the given configuration of compsub that it can make with the given set of candidates and that do not contain any of the points in not. To put it differently: all extensions of compsub containing any point in not have already been generated. The basic mechanism now consists of the following five steps:",
"title": ""
},
{
"docid": "581d8156bc13ad55ca14f7b91b498a96",
"text": "Feature extraction becomes increasingly important as data grows high dimensional. Autoencoder as a neural network based feature extraction method achieves great success in generating abstract features of high dimensional data. However, it fails to consider the relationships of data samples which may affect experimental results of using original and new features. In this paper, we propose a Relation Autoencoder model considering both data features and their relationships. We also extend it to work with other major autoencoder models including Sparse Autoencoder, Denoising Autoencoder and Variational Autoencoder. The proposed relational autoencoder models are evaluated on a set of benchmark datasets and the experimental results show that considering data relationships can generate more robust features which achieve lower construction loss and then lower error rate in further classification compared to the other variants of autoencoders.",
"title": ""
},
{
"docid": "62f4723d9fb26f23f91e550948edd744",
"text": "Audio codecs for automotive applications and smartphones require up to five stereo channels to achieve effective acoustic noise and echo cancellation, thus demanding ADCs with low power and minimal die area. Zoom-ADCs should be well suited for such applications, since they combine compact and energy-efficient SAR ADCs with low-distortion ΔΣ ADCs to simultaneously achieve high energy efficiency, small die area, and high linearity [1,2]. However, previous implementations were limited to the conversion of quasi-static signals, since the two ADCs were operated sequentially, with a coarse SAR conversion followed by, a much slower, fine ΔΣ conversion. This work describes a zoom-ADC with a 20kHz bandwidth, which achieves 107.5dB DR and 104.4dB SNR while dissipating 1.65mW and occupying 0.16mm2. A comparison with recent state-of-the-art ADCs with similar resolution and bandwidth [3-7] shows that the ADC achieves significantly improved energy and area efficiency. These advances are enabled by the use of concurrent fine and coarse conversions, dynamic error-correction techniques, and an inverter-based OTA.",
"title": ""
},
{
"docid": "80bfff01fbb1f6453b37d39b3b8b63f8",
"text": "We consider regularized empirical risk minimization problems. In particular, we minimize the sum of a smooth empirical risk function and a nonsmooth regularization function. When the regularization function is block separable, we can solve the minimization problems in a randomized block coordinate descent (RBCD) manner. Existing RBCD methods usually decrease the objective value by exploiting the partial gradient of a randomly selected block of coordinates in each iteration. Thus they need all data to be accessible so that the partial gradient of the block gradient can be exactly obtained. However, such a \"batch\" setting may be computationally expensive in practice. In this paper, we propose a mini-batch randomized block coordinate descent (MRBCD) method, which estimates the partial gradient of the selected block based on a mini-batch of randomly sampled data in each iteration. We further accelerate the MRBCD method by exploiting the semi-stochastic optimization scheme, which effectively reduces the variance of the partial gradient estimators. Theoretically, we show that for strongly convex functions, the MRBCD method attains lower overall iteration complexity than existing RBCD methods. As an application, we further trim the MRBCD method to solve the regularized sparse learning problems. Our numerical experiments shows that the MRBCD method naturally exploits the sparsity structure and achieves better computational performance than existing methods.",
"title": ""
},
{
"docid": "153b5c38978c54391bd5ec097416883c",
"text": "Applying simple natural language processing methods on social media data have shown to be able to reveal insights of specific mental disorders. However, few studies have employed fine-grained sentiment or emotion related analysis approaches in the detection of mental health conditions from social media messages. This work, for the first time, employed fine-grained emotions as features and examined five popular machine learning classifiers in the task of identifying users with selfreported mental health conditions (i.e. Bipolar, Depression, PTSD, and SAD) from the general public. We demonstrated that the support vector machines and the random forests classifiers with emotion-based features and combined features showed promising improvements to the performance on this task.",
"title": ""
},
{
"docid": "f017d6dff147f00fcbb2356e4fd9e06f",
"text": "In this paper, an index based on customer perspective is proposed for evaluating transit service quality. The index, named Heterogeneous Customer Satisfaction Index, is inspired by the traditional Customer Satisfaction Index, but takes into account the heterogeneity among the user judgments about the different service aspects. The index allows service quality to be monitored, the causes generating customer satisfaction/dissatisfaction to be identified, and the strategies for improving the service quality to be defined. The proposed methodologies show some advantages compared to the others adopted for measuring service quality, because it can be easily applied by the transit operators. Introduction Transit service quality is an aspect markedly influencing travel user choices. Customers who have a good experience with transit will probably use transit services again, while customers who experience problems with transit may not use transit services the next time. For this reason, improving service quality is important for customizing habitual travellers and for attracting new users. Moreover, the need for supplying services characterized by high levels of quality guarantees competition among transit agencies, and, consequently, the user takes advantage of Journal of Public Transportation, Vol. 12, No. 3, 2009 22 better services. To achieve these goals, transit agencies must measure their performance. Customer satisfaction represents a measure of company performance according to customer needs (Hill et al. 2003); therefore, the measure of customer satisfaction provides a service quality measure. Customers express their points of view about the services by providing judgments on some service aspects by means of ad hoc experimental sample surveys, known in the literature as “customer satisfaction surveys.” The aspects generally describing transit services can be distinguished into the characteristics that more properly describe the service (e.g., service frequency), and less easily measurable characteristics that depend more on customer tastes (e.g., comfort). In the literature, there are many studies about transit service quality. Examples of the most recent research are reported in TRB (2003a, 2003b), Eboli and Mazzulla (2007), Tyrinopoulos and Antoniou (2008), Iseki and Taylor (2008), and Joewono and Kubota (2007). In these studies, different attributes determining transit service quality are discussed; the main service aspects characterizing a transit service include service scheduling and reliability, service coverage, information, comfort, cleanliness, and safety and security. Service scheduling can be defined by service frequency (number of runs per hour or per day) and service time (time during which the service is available). Service reliability concerns the regularity of runs that are on schedule and on time; an unreliable service does not permit user travel times to be optimized. Service coverage concerns service availability in the space and is expressed through line path characteristics, number of stops, distance between stops, and accessibility of stops. Information consists of indications about departure and arrival scheduled times of the runs, boarding/alighting stop location, ticket costs, and so on. Comfort refers to passenger personal comfort while transit is used, including climate control, seat comfort, ride comfort including the severity of acceleration and braking, odors, and vehicle noise. 
Cleanliness refers to the internal and external cleanliness of vehicles and cleanliness of terminals and stops. Safety concerns the possibility that users can be involved in an accident, and security concerns personal security against crimes. Other service aspects characterizing transit services concern fares, personnel appearance and helpfulness, environmental protection, and customer services such ease of purchasing tickets and administration of complaints. The objective of this research is to provide a tool for measuring the overall transit service quality, taking into account user judgments about different service aspects. A New Customer Satisfaction Index for Evaluating Transit Service Quality 23 A synthetic index of overall satisfaction is proposed, which easily can be used by transit agencies for monitoring service performance. In the next section, a critical review of indexes for measuring service quality from a user perspective is made; observations and remarks emerge from the comparison among the indexes analysed. Because of the disadvantages of the indexes reported in the literature, a new index is proposed. The proposed methodology is applied by using experimental data collected by a customer satisfaction survey of passengers of a suburban transit service. The obtained results are discussed at the end of the paper. Customer Satisfaction Indexes The concept of customer satisfaction as a measure of perceived service quality was introduced in market research. In this field, many customer satisfaction techniques have been developed. The best known and most widely applied technique is the ServQual method, proposed by Parasuraman et al. (1985). The ServQual method introduced the concept of customer satisfaction as a function of customer expectations (what customers expect from the service) and perceptions (what customers receive). The method was developed to assess customer perceptions of service quality in retail and service organizations. In the method, 5 service quality dimensions and 22 items for measuring service quality are defined. Service quality dimensions are tangibles, reliability, responsiveness, assurance, and empathy. The method is in the form of a questionnaire that uses a Likert scale on seven levels of agreement/disagreement (from “strongly disagree” to “strongly agree”). ServQual provides an index calculated through the difference between perception and expectation rates expressed for the items, weighted as a function of the five service quality dimensions embedding the items. Some variations of this method were introduced in subsequent years. For example, Cronin and Taylor (1994) introduced the ServPerf method, and Teas (1993) proposed a model named Normed Quality (NQ). Although ServQual represents the most widely adopted method for measuring service quality, the adopted scale of measurement for capturing customer judgments has some disadvantages in obtaining an overall numerical measure of service quality; in fact, to calculate an index, the analyst is forced to assign a numerical code to each level of judgment. In this way, equidistant numbers are assigned to each qualitative point of the scale; this operation presumes that the distances between two consecutive levels of judgment expressed by the customers have the same size. Journal of Public Transportation, Vol. 12, No. 3, 2009 24 A number of both national and international indexes also based on customer perceptions and expectations have been introduced in the last decade. 
For the most part, these satisfaction indexes are embedded within a system of cause-and-effect relationships or satisfaction models. The models also contain latent or unobservable variables and provide a reliable satisfaction index (Johnson et al. 2001). The Swedish Customer Satisfaction Barometer (SCSB) was established in 1989 and is the first national customer satisfaction index for domestically purchased and consumed products and services (Fornell 1992). The American Customer Satisfaction Index (ACSI) was introduced in the fall of 1994 (Fornell et al. 1996). The Norwegian Customer Satisfaction Barometer (NCSB) was introduced in 1996 (Andreassen and Lervik 1999; Andreassen and Lindestad 1998). The most recent development among these indexes is the European Customer Satisfaction Index (ECSI) (Eklof 2000). The original SCSB model is based on customer perceptions and expectations regarding products or services. All the other models are based on the same concepts, but they differ from the original regarding the variables considered and the cause-and-effect relationships introduced. The models from which these indexes are derived have a very complex structure. In addition, model coefficient estimation needs of large quantities of experimental data and the calibration procedure are not easily workable. For this reason, this method is not very usable by transit agencies, particularly for monitoring service quality. More recently, an index based on discrete choice models and random utility theory has been introduced. The index, named Service Quality Index (SQI), is calculated by the utility function of a choice alternative representing a service (Hensher and Prioni 2002). The user makes a choice between the service habitually used and hypothetical services. Hypothetical services are defined through Stated Preferences (SP) techniques by varying the level of quality of aspects characterizing the service. Habitual service is described by the user by assigning a value to each service aspect. The design of this type of SP experiments is generally very complex; an example of an SP experimental design was introduced by Eboli and Mazzulla (2008a). SQI was firstly calculated by a Multinomial Logit model to evaluate the level of quality of transit services. Hierarchical Logit models were introduced for calculating SQI by Hensher et al. (2003) and Marcucci and Gatta (2007). Mixed Logit models were introduced by Hensher (2001) and Eboli and Mazzulla (2008b). SQI includes, indirectly, the concept of satisfaction as a function of customer expectations and perceptions. The calculation of the indexes following approaches different from SQI presumes the use of customer judgments in terms of rating. To the contrary, SQI is based on choice data; nevertheless, by choosing a service, the user indirectly A New Customer Satisfaction Index for Evaluating Transit Service Quality 25 expresses a judgment of importance on the service aspects defining the services. In addition, the user expres",
"title": ""
},
{
"docid": "63b04046e1136290a97f885783dda3bd",
"text": "This paper considers the design of secondary wireless mesh networks which use leased frequency channels. In a given geographic region, the available channels are individually priced and leased exclusively through a primary spectrum owner. The usage of each channel is also subject to published interference constraints so that the primary user is not adversely affected. When the network is designed and deployed, the secondary user would like to minimize the costs of using the required resources while satisfying its own traffic and interference requirements. This problem is formulated as a mixed integer optimization which gives the optimum deployment cost as a function of the secondary node positioning, routing, and frequency allocations. Because of the problem's complexity, the optimum result can only be found for small problem sizes. To accommodate more practical deployments, two algorithms are proposed and their performance is compared to solutions obtained from the optimization. The first algorithm is a greedy flow-based scheme (GFB) which iterates over the individual node flows based on solving a much simpler optimization at each step. The second algorithm (ILS) uses an iterated local search whose initial solution is based on constrained shortest path routing. Our results show that the proposed algorithms perform well for a variety of network scenarios.",
"title": ""
},
{
"docid": "aab83f305b6519c091f883d869a0b92c",
"text": "With the development of the web of data, recent statistical, data-to-text generation approaches have focused on mapping data (e.g., database records or knowledge-base (KB) triples) to natural language. In contrast to previous grammar-based approaches, this more recent work systematically eschews syntax and learns a direct mapping between meaning representations and natural language. By contrast, I argue that an explicit model of syntax can help support NLG in several ways. Based on case studies drawn from KB-to-text generation, I show that syntax can be used to support supervised training with little training data; to ensure domain portability; and to improve statistical hypertagging.",
"title": ""
},
{
"docid": "1c531bf133d85e534e9a2f7d3c0046da",
"text": "AIM\nThis paper is a report of a study conducted to describe the prevalence and risk factors for lower back pain amongst a variety of Turkish hospital workers including nurses, physicians, physical therapists, technicians, secretaries and hospital aides.\n\n\nBACKGROUND\nHospital workers experience more low back pain than many other groups, the incidence varies among countries. Work activities involving bending, twisting, frequent heavy lifting, awkward static posture and psychological stress are regarded as causal factors for many back injuries.\n\n\nMETHOD\nA 44-item questionnaire was completed by 1600 employees in six hospitals associated with one Turkish university using a cross-sectional survey design. Data were collected over nine months from December 2005 to August 2006 and analysed using Chi square and multivariate logistic regression techniques.\n\n\nFINDINGS\nMost respondents (65.8%) had experienced low back pain, with 61.3% reporting an occurrence within the last 12 months. The highest prevalence was reported by nurses (77.1%) and the lowest amongst secretaries (54.1%) and hospital aides (53.5%). In the majority of cases (78.3%), low back pain began after respondents started working in the hospital, 33.3% of respondents seeking medical care for 'moderate' low back pain while 53.8% (n = 143) had been diagnosed with a herniated lumbar disc. Age, female gender, smoking, occupation, perceived work stress and heavy lifting were statistically significant risk-factors when multivariate logistic regression techniques were conducted (P < 0.05).\n\n\nCONCLUSION\nPreventive measures should be taken to reduce the risk of lower back pain, such as arranging proper rest periods, educational programmes to teach the proper use of body mechanics and smoking cessation programmes.",
"title": ""
},
{
"docid": "ed0736d1f8c35ec8b0c2f5bb9adfb7f9",
"text": "Neff's (2003a, 2003b) notion of self-compassion emphasizes kindness towards one's self, a feeling of connectedness with others, and mindful awareness of distressing experiences. Because exposure to trauma and subsequent posttraumatic stress symptoms (PSS) may be associated with self-criticism and avoidance of internal experiences, the authors examined the relationship between self-compassion and PSS. Out of a sample of 210 university students, 100 endorsed experiencing a Criterion A trauma. Avoidance symptoms significantly correlated with self-compassion, but reexperiencing and hyperarousal did not. Individuals high in self-compassion may engage in less avoidance strategies following trauma exposure, allowing for a natural exposure process.",
"title": ""
},
{
"docid": "11a1c92620d58100194b735bfc18c695",
"text": "Stabilization by static output feedback (SOF) is a long-standing open problem in control: given an n by n matrix A and rectangular matrices B and C, find a p by q matrix K such that A + BKC is stable. Low-order controller design is a practically important problem that can be cast in the same framework, with (p+k)(q+k) design parameters instead of pq, where k is the order of the controller, and k << n. Robust stabilization further demands stability in the presence of perturbation and satisfactory transient as well as asymptotic system response. We formulate two related nonsmooth, nonconvex optimization problems over K, respectively with the following objectives: minimization of the -pseudospectral abscissa of A+BKC, for a fixed ≥ 0, and maximization of the complex stability radius of A + BKC. Finding global optimizers of these functions is hard, so we use a recently developed gradient sampling method that approximates local optimizers. For modest-sized systems, local optimization can be carried out from a large number of starting points with no difficulty. The best local optimizers may then be investigated as candidate solutions to the static output feedback or low-order controller design problem. We show results for two problems published in the control literature. The first is a turbo-generator example that allows us to show how different choices of the optimization objective lead to stabilization with qualitatively different properties, conveniently visualized by pseudospectral plots. The second is a well known model of a Boeing 767 aircraft at a flutter condition. For this problem, we are not aware of any SOF stabilizing K published in the literature. Our method was not only able to find an SOF stabilizing K, but also to locally optimize the complex stability radius of A + BKC. We also found locally optimizing order–1 and order–2 controllers for this problem. All optimizers are visualized using pseudospectral plots.",
"title": ""
},
{
"docid": "a958e1f7d194cc8b1714d3f7d107de2e",
"text": "Following the recent progress in deep learning, researchers and practitioners of machine learning are recognizing the importance of understanding and interpreting what goes on inside these black box models. Recurrent neural networks have recently revolutionized speech recognition and translation, and these powerful models would be very useful in other applications involving sequential data. However, adoption has been slow in applications such as health care, where practitioners are reluctant to let an opaque system make crucial decisions. If we can make the inner workings of RNNs more interpretable, more applications can benefit from their power.",
"title": ""
},
{
"docid": "2b0534f3d659e8eaea4d5b53af4617db",
"text": "Many organisations are currently involved in implementing Sustainable Supply Chain Management (SSCM) initiatives to address societal expectations and government regulations. Implementation of these initiatives has in turn created complexity due to the involvement of collection, management, control, and monitoring of a wide range of additional information exchanges among trading partners, which was not necessary in the past. Organisations thus would rely more on meaningful support from their IT function to help them implement and operate SSCM practices. Given the growing global recognition of the importance of sustainable supply chain (SSC) practices, existing corporate IT strategy and plans need to be revisited for IT to remain supportive and aligned with new sustainability aspirations of their organisations. Towards this goal, in this paper we report on the development of an IT maturity model specifically designed for SSCM context. The model is built based on four dimensions derived from software process maturity and IS/IT planning literatures. Our proposed model defines four progressive IT maturity stages for corporate IT function to support SSCM implementation initiatives. Some implications of the study finding and several challenges that may potentially hinder acceptance of the model by organisations are discussed.",
"title": ""
},
{
"docid": "d719fb1fe0faf76c14d24f7587c5345f",
"text": "This paper describes a framework for the estimation of shape from sparse or incomplete range data. It uses a shape representation called blending, which allows for the geometric combination of shapes into a unified model— selected regions of the component shapes are cut-out and glued together. Estimation of shape using this representation is realized using a physics-based framework, and also includes a process for deciding how to adapt the structure and topology of the model to improve the fit. The blending representation helps avoid abrupt changes in model geometry during fitting by allowing the smooth evolution of the shape, which improves the robustness of the technique. We demonstrate this framework with a series of experiments showing its ability to automatically extract structured representations from range data given both structurally and topologically complex objects. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-97-12. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/47 (appeared inIEEE Transactions on Pattern Analysis and Machine Intelligence , Vol. 20, No. 11, pp. 1186-1205, November 1998) Shape Evolution with Structural and Topological Changes using Blending Douglas DeCarlo and Dimitris Metaxas †",
"title": ""
},
{
"docid": "c74ea692ff383583c2bffe1cd26078e9",
"text": "We propose a toolkit to generate structured synthetic documents emulating the actual document production process. Synthetic documents can be used to train systems to perform document analysis tasks. In our case we address the record counting task on handwritten structured collections containing a limited number of examples. Using the DocEmul toolkit we can generate a larger dataset to train a deep architecture to predict the number of records for each page. The toolkit is able to generate synthetic collections and also perform data augmentation to create a larger trainable dataset. It includes one method to extract the page background from real pages which can be used as a substrate where records can be written on the basis of variable structures and using cursive fonts. Moreover, it is possible to extend the synthetic collection by adding random noise, page rotations, and other visual variations. We performed some experiments on two different handwritten collections using the toolkit to generate synthetic data to train a Convolutional Neural Network able to count the number of records in the real collections.",
"title": ""
}
] |
scidocsrr
|
b9ff915dc7f3e676a4bce3b771eeeaf2
|
From past to future: Temporal self-continuity across the life span.
|
[
{
"docid": "dab84197dec153309bb45368ab730b12",
"text": "Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the conditional relations is often a tedious and error-prone task. This article provides an overview of methods used to probe interaction effects and describes a unified collection of freely available online resources that researchers can use to obtain significance tests for simple slopes, compute regions of significance, and obtain confidence bands for simple slopes across the range of the moderator in the MLR, HLM, and LCA contexts. Plotting capabilities are also provided.",
"title": ""
}
] |
[
{
"docid": "6660bcfd564726421d9eaaa696549454",
"text": "When building intelligent spaces, the knowledge representation for encapsulating rooms, users, groups, roles, and other information is a fundamental design question. We present a semantic network as such a representation, and demonstrate its utility as a basis for ongoing work.",
"title": ""
},
{
"docid": "1c4930b976f35488e9df6ead74358878",
"text": "The covalently modified ureido-conjugated chitosan/TPP multifunctional nanoparticles have been developed as targeted nanomedicine delivery system for eradication of Helicobacter pylori. H. pylori can specifically express the urea transport protein on its membrane to transport urea into cytoplasm for urease to produce ammonia, which protects the bacterium in the acid milieu of stomach. The clinical applicability of topical antimicrobial agent is needed to eradicate H. pylori in the infected fundal area. In this study, we designed and synthesized two ureido-conjugated chitosan derivatives UCCs-1 and UCCs-2 for preparation of multifunctional nanoparticles. The process was optimized in order to prepare UCCs/TPP nanoparticles for encapsulation of amoxicillin. The results showed that the amoxicillin-UCCs/TPP nanoparticles exhibited favorable pH-sensitive characteristics, which could procrastinate the release of amoxicillin at gastric acids and enable the drug to deliver and target to H. pylori at its survival region effectively. Compared with unmodified amoxicillin-chitosan/TPP nanoparticles, a more specific and effective H. pylori growth inhibition was observed for amoxicillin-UCCs/TPP nanoparticles. Drug uptake analysis tested by flow cytometry and confocal laser scanning microscopy verified that the uptake of FITC-UCCs-2/TPP nanoparticles was associated with urea transport protein on the membrane of H. pylori and reduced with the addition of urea as competitive transport substrate. These findings suggest that the multifunctional amoxicillin-loaded nanoparticles have great potential for effective therapy of H. pylori infection. They may also serve as pharmacologically effective nanocarriers for oral targeted delivery of other therapeutic drugs to treat H. pylori.",
"title": ""
},
{
"docid": "c6c4edf88c38275e82aa73a11ef3a006",
"text": "In this paper, we propose a new concept for understanding the role of algorithms in daily life: algorithmic authority. Algorithmic authority is the legitimate power of algorithms to direct human action and to impact which information is considered true. We use this concept to examine the culture of users of Bit coin, a crypto-currency and payment platform. Through Bit coin, we explore what it means to trust in algorithms. Our study utilizes interview and survey data. We found that Bit coin users prefer algorithmic authority to the authority of conventional institutions, which they see as untrustworthy. However, we argue that Bit coin users do not have blind faith in algorithms, rather, they acknowledge the need for mediating algorithmic authority with human judgment. We examine the tension between members of the Bit coin community who would prefer to integrate Bit coin with existing institutions and those who would prefer to resist integration.",
"title": ""
},
{
"docid": "d035c972226a97ec3985cd76bf9afc8c",
"text": "Surfactants and their mixtures can drastically change the interfacial properties and hence are used in many industrial processes such as dispersion/flocculation, flotation, emulsification, corrosion inhibition, cosmetics, drug delivery, chemical mechanical polishing, enhanced oil recovery, and nanolithography. A review of studies on adsorption of single surfactant as well as mixtures of various types (anionic-cationic, anionic-nonionic, cationic-nonionic, cationic-zwitterionic and nonionic-nonionic) is presented here along with mechanisms involved. Results obtained using techniques such as zeta potential, flotation, AFM, specular neutron reflectivity, small angle neutron scattering, fluorescence, ESR, Raman spectroscopy, ellipsometry, HPLC and ATR-IR are reviewed along with those from traditional techniques to elucidate the mechanisms of adsorption and particularly to understand synergistic/antagonistic interactions at solution/liquid interfaces and nanostructures of surface aggregates. In addition, adsorption of several mixed surfactant systems is considered due to their industrial relevance. Finally an attempt is made to derive structure-property relationships to provide a solid foundation for the design and use of surfactant formulations for industrial applications.",
"title": ""
},
{
"docid": "66376f4f1e9e4ec7ca24465c334f4f62",
"text": "Moving object detection and tracking is an evolving research field due to its wide applications in traffic surveillance, 3D reconstruction, motion analysis (human and non-human), activity recognition, medical imaging etc. However real time object tracking is a challenging task due to dynamic tacking environment and different limiting parameters like view point, anthropometric variation, dimensions of an object, cluttered background, camera motions, occlusion etc. In this paper, we have developed new object detection and tracking algorithm which makes use of optical flow in conjunction with motion vector estimation for object detection and tracking in a sequence of frames. The optical flow gives valuable information about the object movement even if no quantitative parameters are computed. The motion vector estimation technique can provide an estimation of object position from consecutive frames which increases the accuracy of this algorithm and helps to provide robust result irrespective of image blur and cluttered background. The use of median filter with this algorithm makes it more robust in the presence of noise. The developed algorithm is applied to wide range of standard and real time datasets with different illumination (indoor and outdoor), object speed etc. The obtained results indicates that the developed algorithm over performs over conventional methods and state of art methods of object tracking.",
"title": ""
},
{
"docid": "c0b27b81cf6475866e6e794bedfee474",
"text": "Nowadays, many e-Commerce tools support customers with automatic recommendations. Many of them are centralized and lack in ef ciency and scalability, while other ones are distributed and require a computational overhead excessive for many devices. Moreover, all the past proposals are not open and do not allow new personalized terms to be introduced into the domain ontology. In this paper, we present a distributed recommender, based on a multi-tiered agent system, trying to face the issues outlined above. The proposed system is able to generate very effective suggestions without a too onerous computational task. We show that our system introduces signi cant advantages in terms of openess, privacy and security.",
"title": ""
},
{
"docid": "4bb4bbd91925d2faafe5516519d6cc62",
"text": "Cyclic GMP (cGMP) modulates important cerebral processes including some forms of learning and memory. cGMP pathways are strongly altered in hyperammonemia and hepatic encephalopathy (HE). Patients with liver cirrhosis show reduced intracellular cGMP in lymphocytes, increased cGMP in plasma and increased activation of soluble guanylate cyclase by nitric oxide (NO) in lymphocytes, which correlates with minimal HE assessed by psychometric tests. Activation of soluble guanylate cyclase by NO is also increased in cerebral cortex, but reduced in cerebellum, from patients who died with HE. This opposite alteration is reproduced in vivo in rats with chronic hyperammonemia or HE. A main pathway modulating cGMP levels in brain is the glutamate-NO-cGMP pathway. The function of this pathway is impaired both in cerebellum and cortex of rats with hyperammonemia or HE. Impairment of this pathway is responsible for reduced ability to learn some types of tasks. Restoring the pathway and cGMP levels in brain restores learning ability. This may be achieved by administering phosphodiesterase inhibitors (zaprinast, sildenafil), cGMP, anti-inflammatories (ibuprofen) or antagonists of GABAA receptors (bicuculline). These data support that increasing cGMP by safe pharmacological means may be a new therapeutic approach to improve cognitive function in patients with minimal or clinical HE.",
"title": ""
},
{
"docid": "cdff0e2d4c0d91ed360569bd28422a1a",
"text": "An antipodal Vivaldi antenna (AVA) with novel symmetric two-layer double-slot structure is proposed. When excited with equiamplitude and opposite phase, the two slots will have the sum vector of their E-field vectors parallel to the antenna’s plane, which is uniform to the E-field vector in the slot of a balanced AVA with three-layer structure. Compared with a typical AVA with the same size, the proposed antenna has better impedance characteristics because of the amelioration introduced by the coupling between the two slots, as well as the more symmetric radiation patterns and the remarkably lowered cross-polarization level at the endfire direction. For validating the analysis, an UWB balun based on the double-sided parallel stripline is designed for realizing the excitation, and a sample of the proposed antenna is fabricated. The measured results reveal that the proposed has an operating frequency range from 2.8 to 15 GHz, in which the cross-polarization level is less than −24.8 dB. Besides, the group delay of two face-to-face samples has a variation less than 0.62 ns, which exhibits the ability of the novel structure for transferring pulse signal with high fidelity. The simple two-layer structure, together with the improvement both in impedance and radiation characteristics, makes the proposed antenna much desirable for the UWB applications.",
"title": ""
},
{
"docid": "2fc2234e6f8f70e0b12f1f72b1d21175",
"text": "Servers and HPC systems often use a strong memory error correction code, or ECC, to meet their reliability and availability requirements. However, these ECCs often require significant capacity and/or power overheads. We observe that since memory channels are independent from one another, error correction typically needs to be performed for one channel at a time. Based on this observation, we show that instead of always storing in memory the actual ECC correction bits as do existing systems, it is sufficient to store the bitwise parity of the ECC correction bits of different channels for fault-free memory regions, and store the actual ECC correction bits only for faulty memory regions. By trading off the resultant ECC capacity overhead reduction for improved memory energy efficiency, the proposed technique reduces memory energy per instruction by 54.4% and 20.6%, respectively, compared to a commercial chipkill correct ECC and a DIMM-kill correct ECC, while incurring similar or lower capacity overheads.",
"title": ""
},
{
"docid": "be3484068f501c0393a69e26f25d9cd6",
"text": "Embedding and projection matrices are commonly used in neural language models (NLM) as well as in other sequence processing networks that operate on large vocabularies. We examine such matrices in fine-tuned language models and observe that a NLM learns word vectors whose norms are related to the word frequencies. We show that by initializing the weight norms with scaled log word counts, together with other techniques, lower perplexities can be obtained in early epochs of training. We also introduce a weight norm regularization loss term, whose hyperparameters are tuned via a grid search. With this method, we are able to significantly improve perplexities on two word-level language modeling tasks (without dynamic evaluation): from 54.44 to 53.16 on Penn Treebank (PTB) and from 61.45 to 60.13 on WikiText-2 (WT2).",
"title": ""
},
{
"docid": "299d59735ea1170228aff531645b5d4a",
"text": "While the economic case for cloud computing is compelling, the security challenges it poses are equally striking. In this work we strive to frame the full space of cloud-computing security issues, attempting to separate justified concerns from possible over-reactions. We examine contemporary and historical perspectives from industry, academia, government, and “black hats”. We argue that few cloud computing security issues are fundamentally new or fundamentally intractable; often what appears “new” is so only relative to “traditional” computing of the past several years. Looking back further to the time-sharing era, many of these problems already received attention. On the other hand, we argue that two facets are to some degree new and fundamental to cloud computing: the complexities of multi-party trust considerations, and the ensuing need for mutual auditability.",
"title": ""
},
{
"docid": "0f208f26191386dd5c868fa3cc7c7b31",
"text": "This paper revisits the data–information–knowledge–wisdom (DIKW) hierarchy by examining the articulation of the hierarchy in a number of widely read textbooks, and analysing their statements about the nature of data, information, knowledge, and wisdom. The hierarchy referred to variously as the ‘Knowledge Hierarchy’, the ‘Information Hierarchy’ and the ‘Knowledge Pyramid’ is one of the fundamental, widely recognized and ‘taken-for-granted’ models in the information and knowledge literatures. It is often quoted, or used implicitly, in definitions of data, information and knowledge in the information management, information systems and knowledge management literatures, but there has been limited direct discussion of the hierarchy. After revisiting Ackoff’s original articulation of the hierarchy, definitions of data, information, knowledge and wisdom as articulated in recent textbooks in information systems and knowledge management are reviewed and assessed, in pursuit of a consensus on definitions and transformation processes. This process brings to the surface the extent of agreement and dissent in relation to these definitions, and provides a basis for a discussion as to whether these articulations present an adequate distinction between data, information, and knowledge. Typically information is defined in terms of data, knowledge in terms of information, and wisdom in terms of knowledge, but there is less consensus in the description of the processes that transform elements lower in the hierarchy into those above them, leading to a lack of definitional clarity. In addition, there is limited reference to wisdom in these texts.",
"title": ""
},
{
"docid": "e53135112cea5bc48e0b7fef4bb20d33",
"text": "Netflix is the leading provider of on-demand Internet video streaming in the US and Canada, accounting for 29.7% of the peak downstream traffic in US. Understanding the Netflix architecture and its performance can shed light on how to best optimize its design as well as on the design of similar on-demand streaming services. In this paper, we perform a measurement study of Netflix to uncover its architecture and service strategy. We find that Netflix employs a blend of data centers and Content Delivery Networks (CDNs) for content distribution. We also perform active measurements of the three CDNs employed by Netflix to quantify the video delivery bandwidth available to users across the US. Finally, as improvements to Netflix's current CDN assignment strategy, we propose a measurement-based adaptive CDN selection strategy and a multiple-CDN-based video delivery strategy, and demonstrate their potentials in significantly increasing user's average bandwidth.",
"title": ""
},
{
"docid": "16cd40642b6179cbf08ed09577c12bc9",
"text": "Considerable scientific and technological efforts have been devoted to develop neuroprostheses and hybrid bionic systems that link the human nervous system with electronic or robotic prostheses, with the main aim of restoring motor and sensory functions in disabled patients. A number of neuroprostheses use interfaces with peripheral nerves or muscles for neuromuscular stimulation and signal recording. Herein, we provide a critical overview of the peripheral interfaces available and trace their use from research to clinical application in controlling artificial and robotic prostheses. The first section reviews the different types of non-invasive and invasive electrodes, which include surface and muscular electrodes that can record EMG signals from and stimulate the underlying or implanted muscles. Extraneural electrodes, such as cuff and epineurial electrodes, provide simultaneous interface with many axons in the nerve, whereas intrafascicular, penetrating, and regenerative electrodes may contact small groups of axons within a nerve fascicle. Biological, technological, and material science issues are also reviewed relative to the problems of electrode design and tissue injury. The last section reviews different strategies for the use of information recorded from peripheral interfaces and the current state of control neuroprostheses and hybrid bionic systems.",
"title": ""
},
{
"docid": "b7957cc83988e0be2da64f6d9837419c",
"text": "Description: A revision of the #1 text in the Human Computer Interaction field, Interaction Design, the third edition is an ideal resource for learning the interdisciplinary skills needed for interaction design, human-computer interaction, information design, web design and ubiquitous computing. The authors are acknowledged leaders and educators in their field, with a strong global reputation. They bring depth of scope to the subject in this new edition, encompassing the latest technologies and devices including social networking, Web 2.0 and mobile devices. The third edition also adds, develops and updates cases, examples and questions to bring the book in line with the latest in Human Computer Interaction. Interaction Design offers a cross-disciplinary, practical and process-oriented approach to Human Computer Interaction, showing not just what principles ought to apply to Interaction Design, but crucially how they can be applied. The book focuses on how to design interactive products that enhance and extend the way people communicate, interact and work. Motivating examples are included to illustrate both technical, but also social and ethical issues, making the book approachable and adaptable for both Computer Science and non-Computer Science users. Interviews with key HCI luminaries are included and provide an insight into current and future trends.",
"title": ""
},
{
"docid": "c668a3ca2117729a6cbbd0bc932a97f8",
"text": "An inescapable bottleneck with learning from large data sets is the high cost of labeling training data. Unsupervised learning methods have promised to lower the cost of tagging by leveraging notions of similarity among data points to assign tags. However, unsupervised and semi-supervised learning techniques often provide poor results due to errors in estimation. We look at methods that guide the allocation of human effort for labeling data so as to get the greatest boosts in discriminatory power with increasing amounts of work. We focus on the application of value of information to Gaussian Process classifiers and explore the effectiveness of the method on the task of classifying voice messages.",
"title": ""
},
{
"docid": "42ea21063c05c17a690f2aca898160f2",
"text": "Braided pneumatic artificial muscles, and in particular the better known type with a double helical braid usually called the McKibben muscle, seem to be at present the best means for motorizing robotarms with artificial muscles. Their ability to develop high maximum force associated with lightness and a compact cylindrical shape, as well as their analogical behavior with natural skeletal muscle were very well emphasized in the 1980s by the development of the Bridgestone “soft robot” actuated by “rubbertuators”. Recent publications have presented ways for modeling McKibben artificial muscle as well as controlling its highly non-linear dynamic behavior. However, fewer studies have concentrated on analyzing the integration of artificial muscles with robot-arm architectures since the first Bridgestone prototypes were designed. In this paper we present the design of a 7R anthropomorphic robot-arm entirely actuated by antagonistic McKibben artificial muscle pairs. The validation of the robot-arm architecture was performed in a teleoperation mode. KEY WORDS—anthropomorphic robot-arm, artificial muscle, McKibben muscle",
"title": ""
},
{
"docid": "30279db171fffe6fac561541a5d175ca",
"text": "Deformable displays can provide two major benefits compared to rigid displays: Objects of different shapes and deformabilities, situated in our physical environment, can be equipped with deformable displays, and users can benefit from their pre-existing knowledge about the interaction with physical objects when interacting with deformable displays. In this article we present InformationSense, a large, highly deformable cloth display. The article contributes to two research areas in the context of deformable displays: It presents an approach for the tracking of large, highly deformable surfaces, and it presents one of the first UX analyses of cloth displays that will help with the design of future interaction techniques for this kind of display. The comparison of InformationSense with a rigid display interface unveiled the trade-off that while users are able to interact with InformationSense more naturally and significantly preferred InformationSense in terms of joy of use, they preferred the rigid display interfaces in terms of efficiency. This suggests that deformable displays are already suitable if high hedonic qualities are important but need to be enhanced with additional digital power if high pragmatic qualities are required.",
"title": ""
},
{
"docid": "4d5ba0bc7146518d5c59d7c535d0415e",
"text": "We introduce Opcodes, a Python package which presents x86 and x86-64 instruction sets as a set of high-level objects. Opcodes provides information about instruction names, implicit and explicit operands, and instruction encoding. We use the Opcodes package to auto-generate instruction classes for PeachPy, an x86-64 assembler embedded in Python, and enable new functionality.\n The new PeachPy functionality lets low-level optimization experts write high-performance assembly kernels in Python, load them as callable Python functions, test the kernels using numpy and generate object files for Windows, Linux, and Mac OS X entirely within Python. Additionally, the new PeachPy can generate and run assembly code inside Chromium-based browsers by leveraging Native Client technology. Beyond that, PeachPy gained ability to target Google Go toolchain, by generating either source listing for Go assembler, or object files that can be linked with Go toolchain.\n With backends for Windows, Linux, Mac OS X, Native Client, and Go, PeachPy is the most portable way to write high-performance kernels for x86-64 architecture.",
"title": ""
},
{
"docid": "a94faf576951e47f3e04e791088a316c",
"text": "Barber-Say syndrome is a rare disorder characterized by hypertrichosis, redundant skin, and facial dysmorphism. TWIST2 gene mutation previously described in this syndrome was identified in our patient. Genetic testing is recommended in patients presenting with these phenotypic abnormalities, along with their parents, to establish de novo or inherited mutations.",
"title": ""
}
] |
scidocsrr
|
98b98e15d879db6e52e2d33f43cd1413
|
Strong Federations: An Interoperable Blockchain Solution to Centralized Third Party Risks
|
[
{
"docid": "2f2c36452ab45c4234904d9b11f28eb7",
"text": "Bitcoin is a potentially disruptive new crypto-currency based on a decentralized opensource protocol which is gradually gaining popularity. Perhaps the most important question that will affect Bitcoin’s success, is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the restrictions on the rate of transaction processing in Bitcoin as a function of both the bandwidth available to nodes and the network delay, both of which lower the efficiency of Bitcoin’s transaction processing. The security analysis done by Bitcoin’s creator Satoshi Nakamoto [12] assumes that block propagation delays are negligible compared to the time between blocks—an assumption that does not hold when the protocol is required to process transactions at high rates. We improve upon the original analysis and remove this assumption. Using our results, we are able to give bounds on the number of transactions per second the protocol can handle securely. Building on previously published measurements by Decker and Wattenhofer [5], we show these bounds are currently more restrictive by an order of magnitude than the bandwidth needed to stream all transactions. We additionally show how currently planned improvements to the protocol, namely the use of transaction hashes in blocks (instead of complete transaction records), will dramatically alleviate these restrictions. Finally, we present an easily implementable modification to the way Bitcoin constructs its main data structure, the blockchain, that immensely improves security from attackers, especially when the network operates at high rates. This improvement allows for further increases in the number of transactions processed per second. We show that with our proposed modification, significant speedups can be gained in confirmation time of transactions as well. The block generation rate can be securely increased to more than one block per second – a 600 fold speedup compared to today’s rate, while still allowing the network to processes many transactions per second.",
"title": ""
},
{
"docid": "0f0799a04328852b8cfa742cbc2396c9",
"text": "Bitcoin does not scale, because its synchronization mechanism, the blockchain, limits the maximum rate of transactions the network can process. However, using off-blockchain transactions it is possible to create long-lived channels over which an arbitrary number of transfers can be processed locally between two users, without any burden to the Bitcoin network. These channels may form a network of payment service providers (PSPs). Payments can be routed between any two users in real time, without any confirmation delay. In this work we present a protocol for duplex micropayment channels, which guarantees end-to-end security and allow instant transfers, laying the foundation of the PSP network.",
"title": ""
}
] |
[
{
"docid": "98fb03e0e590551fa9e7c82b827c78ed",
"text": "This article describes on-going developments of the VENUS European Project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu) concerning the first mission to sea in Pianosa Island, Italy in October 2006. The VENUS project aims at providing scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. The VENUS project will improve the accessibility of underwater sites by generating thorough and exhaustive 3D records for virtual exploration. In this paper we focus on the underwater photogrammetric approach used to survey the archaeological site of Pianosa. After a brief presentation of the archaeological context we shall see the calibration process in such a context. The next part of this paper is dedicated to the survey: it is divided into two parts: a DTM of the site (combining acoustic bathymetry and photogrammetry) and a specific artefact plotting dedicated to the amphorae present on the site. * Corresponding author. This is useful to know for communication with the appropriate person in cases with more than one author. ** http://cordis.europa.eu/ist/digicult/venus.htm or the project web site : http://www.venus-project.eu 1. VENUS, VIRTUAL EXPLORATION OF UNDERWATER SITES The VENUS project is funded by European Commission, Information Society Technologies (IST) programme of the 6th FP for RTD . It aims at providing scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. (Chapman et alii, 2006). Underwater archaeological sites, for example shipwrecks, offer extraordinary opportunities for archaeologists due to factors such as darkness, low temperatures and a low oxygen rate which are favourable to preservation. On the other hand, these sites can not be experienced first hand and today are continuously jeopardised by activities such as deep trawling that destroy their surface layer. The VENUS project will improve the accessibility of underwater sites by generating thorough and exhaustive 3D records for virtual exploration. The project team plans to survey shipwrecks at various depths and to explore advanced methods and techniques of data acquisition through autonomous or remotely operated unmanned vehicles with innovative sonar and photogrammetry equipment. Research will also cover aspects such as data processing and storage, plotting of archaeological artefacts and information system management. This work will result in a series of best practices and procedures for collecting and storing data. Further, VENUS will develop virtual reality and augmented reality tools for the visualisation of an immersive interaction with a digital model of an underwater site. The model will be made accessible online, both as an example of digital preservation and for demonstrating new facilities of exploration in a safe, cost-effective and pedagogical environment. The virtual underwater site will provide archaeologists with an improved insight into the data and the general public with simulated dives to the site. The VENUS consortium, composed of eleven partners, is pooling expertise in various disciplines: archaeology and underwater exploration, knowledge representation and photogrammetry, virtual reality and digital data preservation. This paper focuses on the first experimentation in Pianosa Island, Tuscany, Italy. The document is structured as follows. 
A short description of the archaeological context, then the next section explains the survey method: calibration, collecting photographs using ROV and divers, photographs orientation and a particular way to measure amphorae with photogrammetry using archaeological knowledge. A section shows 3D results in VRML and finally we present the future planned work. 2. THE UNDERWATER ARCHAEOLOGICAL SITE OF PIANOSA ISLAND The underwater archaeological site of Pianosa, discovered in 1989 by volunteer divers (Giuseppe Adriani, Paolo Vaccari), is located at a depth of 35 m, close to the Scoglio della Scola, in XXI International CIPA Symposium, 01-06 October, Athens, Greece",
"title": ""
},
{
"docid": "79caff0b1495900b5c8f913562d3e84d",
"text": "We propose a formal model of web security based on an abstraction of the web platform and use this model to analyze the security of several sample web mechanisms and applications. We identify three distinct threat models that can be used to analyze web applications, ranging from a web attacker who controls malicious web sites and clients, to stronger attackers who can control the network and/or leverage sites designed to display user-supplied content. We propose two broadly applicable security goals and study five security mechanisms. In our case studies, which include HTML5 forms, Referer validation, and a single sign-on solution, we use a SAT-based model-checking tool to find two previously known vulnerabilities and three new vulnerabilities. Our case study of a Kerberos-based single sign-on system illustrates the differences between a secure network protocol using custom client software and a similar but vulnerable web protocol that uses cookies, redirects, and embedded links instead.",
"title": ""
},
{
"docid": "13d9b338b83a5fcf75f74607bf7428a7",
"text": "We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing trainable address vectors. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. We implement the D-NTM with both continuous and discrete read and write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU controller. We provide extensive analysis of our model and compare different variations of neural Turing machines on this task. We show that our model outperforms long short-term memory and NTM variants. We provide further experimental results on the sequential MNIST, Stanford Natural Language Inference, associative recall, and copy tasks.",
"title": ""
},
{
"docid": "93d4e6aba0ef5c17bb751ff93f0d3848",
"text": "In this work we propose a new SIW structure, called the corrugated SIW (CSIW), which does not require conducting vias to achieve TE10 type boundary conditions at the side walls. Instead, the vias are replaced by quarter wavelength microstrip stubs arranged in a corrugated pattern on the edges of the waveguide. This, along with series interdigitated capacitors, results in a waveguide section comprising two separate conductors, which facilitates shunt connection of active components such as Gunn diodes.",
"title": ""
},
{
"docid": "c1e1d4bf69a9a3de470aa8d7574b5fb5",
"text": "An agent that can see everyday scenes and fluently communicate with people is one of the ambitious goals of artificial intelligence. To achieve that, it is crucial to exploit visually-grounded information and capture subtle nuances from human conversation. To this end, Visual Dialog (VisDial) task has been introduced. In this paper, we propose a new model for visual dialog. Our model employs Bilinear Attention Network (BAN) and Embeddings from Language Models (ELMo) to exploit visually-grounded information and context of dialogs, respectively. Our proposed model outperforms previous state-of-the-art on VisDial v1.0 dataset by a significant margin (5.33% on recall @10)",
"title": ""
},
{
"docid": "1326be667e3ec3aa6bf0732ef97c230a",
"text": "Recognizing human activities in a sequence is a challenging area of research in ubiquitous computing. Most approaches use a fixed size sliding window over consecutive samples to extract features— either handcrafted or learned features—and predict a single label for all samples in the window. Two key problems emanate from this approach: i) the samples in one window may not always share the same label. Consequently, using one label for all samples within a window inevitably lead to loss of information; ii) the testing phase is constrained by the window size selected during training while the best window size is difficult to tune in practice. We propose an efficient algorithm that can predict the label of each sample, which we call dense labeling, in a sequence of human activities of arbitrary length using a fully convolutional network. In particular, our approach overcomes the problems posed by the sliding window step. Additionally, our algorithm learns both the features and classifier automatically. We release a new daily activity dataset based on a wearable sensor with hospitalized patients. We conduct extensive experiments and demonstrate that our proposed approach is able to outperform the state-of-the-arts in terms of classification and label misalignment measures on three challenging datasets: Opportunity, Hand Gesture, and our new dataset.",
"title": ""
},
{
"docid": "14621a077d584d839829aeb8020f196e",
"text": "In this paper an open-domain factoid question answering system for Polish, RAFAEL, is presented. The system goes beyond finding an answering sentence; it also extracts a single string, corresponding to the required entity. Herein the focus is placed on different approaches to entity recognition, essential for retrieving information matching question constraints. Apart from traditional approach, including named entity recognition (NER) solutions, a novel technique, called Deep Entity Recognition (DeepER), is introduced and implemented. It allows a comprehensive search of all forms of entity references matching a given WordNet synset (e.g. an impressionist), based on a previously assembled entity library. It has been created by analysing the first sentences of encyclopaedia entries and disambiguation and redirect pages. DeepER also provides automatic evaluation, which makes possible numerous experiments, including over a thousand questions from a quiz TV show answered on the grounds of Polish Wikipedia. The final results of a manual evaluation on a separate question set show that the strength of DeepER approach lies in its ability to answer questions that demand answers beyond the traditional categories of named entities.",
"title": ""
},
{
"docid": "cf0c4ebe8e2d3e8b8dcb668418a39374",
"text": "Despite the progress in Internet of Things (IoT) research, a general software engineering approach for systematic development of IoT systems and applications is still missing. A synthesis of the state of the art in the area can help frame the key abstractions related to such development. Such a framework could be the basis for guidelines for IoT-oriented software engineering.",
"title": ""
},
{
"docid": "687caec27d44691a6aac75577b32eb81",
"text": "We present unsupervised approaches to the problem of modeling dialog acts in asynchronous conversations; i.e., conversations where participants collaborate with each other at different times. In particular, we investigate a graph-theoretic deterministic framework and two probabilistic conversation models (i.e., HMM and HMM+Mix) for modeling dialog acts in emails and forums. We train and test our conversation models on (a) temporal order and (b) graph-structural order of the datasets. Empirical evaluation suggests (i) the graph-theoretic framework that relies on lexical and structural similarity metrics is not the right model for this task, (ii) conversation models perform better on the graphstructural order than the temporal order of the datasets and (iii) HMM+Mix is a better conversation model than the simple HMM model.",
"title": ""
},
{
"docid": "a5f3b862a02fb26fa7b96ad0a10e762a",
"text": "Thesis for the degree of Doctor of Science (Technology) to be presented with due permission for the public examination and criticism in the Auditorium 1382 at High dynamic performance of an electric motor is a fundamental prerequisite in motion control applications, also known as servo drives. Recent developments in the field of microprocessors and power electronics have enabled faster and faster movements with an electric motor. In such a dynamically demanding application, the dimensioning of the motor differs substantially from the industrial motor design, where feasible characteristics of the motor are for example high efficiency, a high power factor, and a low price. In motion control instead, such characteristics as high overloading capability, high-speed operation, high torque density and low inertia are required. The thesis investigates how the dimensioning of a high-performance servomotor differs from the dimensioning of industrial motors. The two most common servomotor types are examined; an induction motor and a permanent magnet synchronous motor. The suitability of these two motor types in dynamically demanding servo applications is assessed, and the design aspects that optimize the servo characteristics of the motors are analyzed. Operating characteristics of a high performance motor are studied, and some methods for improvements are suggested. The main focus is on the induction machine, which is frequently compared to the permanent magnet synchronous motor. A 4 kW prototype induction motor was designed and manufactured for the verification of the simulation results in the laboratory conditions. Also a dynamic simulation model for estimating the thermal behaviour of the induction motor in servo applications was constructed. The accuracy of the model was improved by coupling it with the electromagnetic motor model in order to take into account the variations in the motor electromagnetic characteristics due to the temperature rise.",
"title": ""
},
{
"docid": "21c15eb5420a7345cc2900f076b15ca1",
"text": "Prokaryotic CRISPR-Cas genomic loci encode RNA-mediated adaptive immune systems that bear some functional similarities with eukaryotic RNA interference. Acquired and heritable immunity against bacteriophage and plasmids begins with integration of ∼30 base pair foreign DNA sequences into the host genome. CRISPR-derived transcripts assemble with CRISPR-associated (Cas) proteins to target complementary nucleic acids for degradation. Here we review recent advances in the structural biology of these targeting complexes, with a focus on structural studies of the multisubunit Type I CRISPR RNA-guided surveillance and the Cas9 DNA endonuclease found in Type II CRISPR-Cas systems. These complexes have distinct structures that are each capable of site-specific double-stranded DNA binding and local helix unwinding.",
"title": ""
},
{
"docid": "7f52cc4e9477147a7eb741222fb96637",
"text": "This paper describes AquaOptical, an underwater optical communication system. Three optical modems have been developed: a long range system, a short range system, and a hybrid. We describe their hardware and software architectures and highlight trade-offs. We present pool and ocean experiments with each system. In clear water AquaOptical was tested to achieve a data rate of 1.2Mbit/sec at distances up to 30m. The system was not tested beyond 30m. In water with visibility estimated at 3m AquaOptical achieved communication at data rates of 0.6Mbit/sec at distances up to 9m.",
"title": ""
},
{
"docid": "7c062c640e98b8186f6d4f4fe1ff80b5",
"text": "As an extension for Internet of Things (IoT), Internet of Vehicles (IoV) achieves unified management in smart transportation area. With the development of IoV, an increasing number of vehicles are connected to the network. Large scale IoV collects data from different places and various attributes, which conform with heterogeneous nature of big data in size, volume, and dimensionality. Big data collection between vehicle and application platform becomes more and more frequent through various communication technologies, which causes evolving security attack. However, the existing protocols in IoT cannot be directly applied in big data collection in large scale IoV. The dynamic network structure and growing amount of vehicle nodes increases the complexity and necessary of the secure mechanism. In this paper, a secure mechanism for big data collection in large scale IoV is proposed for improved security performance and efficiency. To begin with, vehicles need to register in the big data center to connect into the network. Afterward, vehicles associate with big data center via mutual authentication and single sign-on algorithm. Two different secure protocols are proposed for business data and confidential data collection. The collected big data is stored securely using distributed storage. The discussion and performance evaluation result shows the security and efficiency of the proposed secure mechanism.",
"title": ""
},
{
"docid": "37ad695a33cd19b664788964653d81b0",
"text": "Commonsense reasoning and probabilistic planning are two of the most important research areas in artificial intelligence. This paper focuses on Integrated commonsense Reasoning and probabilistic Planning (IRP) problems. On one hand, commonsense reasoning algorithms aim at drawing conclusions using structured knowledge that is typically provided in a declarative way. On the other hand, probabilistic planning algorithms aim at generating an action policy that can be used for action selection under uncertainty. Intuitively, reasoning and planning techniques are good at “understanding the world” and “accomplishing the task” respectively. This paper discusses the complementary features of the two computing paradigms, presents the (potential) advantages of their integration, and summarizes existing research on this topic.",
"title": ""
},
{
"docid": "8b9143a6345b38fd8a15b86756f75a1f",
"text": "A 6.78 MHz resonant wireless power transfer (WPT) system with a 5 W fully integrated power receiver is presented. A conventional low-dropout (LDO) linear regulator supplies power for operating the circuit in the power receiver. However, as the required operating current increases, the power consumption of the LDO regulator increases, which degrades the power efficiency. In order to increase the power efficiency of the receiver, this work proposes a power supply switching circuit (PSSC). When operation starts, the PSSC changes the power source from the low-efficiency LDO regulator to the high-efficiency step-down DC–DC converter. The LDO regulator operates only for initialization. This chip has been fabricated using 0.18 μm high-voltage bipolar– CMOS–DMOS (double-diffused metal–oxide–semiconductor) (BCD) technology with a die area of 2.5 mm x 2.5 mm. A maximum power transfer efficiency of 81% is measured.",
"title": ""
},
{
"docid": "5a9b5313575208b0bdf8ffdbd4e271f5",
"text": "A new method for the design of predictive controllers for SISO systems is presented. The proposed technique allows uncertainties and constraints to be concluded in the design of the control law. The goal is to design, at each sample instant, a predictive feedback control law that minimizes a performance measure and guarantees of constraints are satisfied for a set of models that describes the system to be controlled. The predictive controller consists of a finite horizon parametric-optimization problem with an additional constraint over the manipulated variable behavior. This is an end-constraint based approach that ensures the exponential stability of the closed-loop system. The inclusion of this additional constraint, in the on-line optimization algorithm, enables robust stability properties to be demonstrated for the closedloop system. This is the case even though constraints and disturbances are present. Finally, simulation results are presented using a nonlinear continuous stirred tank reactor model.",
"title": ""
},
{
"docid": "e0092f7964604f7adbe9f010bbac4871",
"text": "In the last decade, Web 2.0 services such as blogs, tweets, forums, chats, email etc. have been widely used as communication media, with very good results. Sharing knowledge is an important part of learning and enhancing skills. Furthermore, emotions may affect decisionmaking and individual behavior. Bitcoin, a decentralized electronic currency system, represents a radical change in financial systems, attracting a large number of users and a lot of media attention. In this work, we investigated if the spread of the Bitcoin’s price is related to the volumes of tweets or Web Search media results. We compared trends of price with Google Trends data, volume of tweets and particularly with those that express a positive sentiment. We found significant cross correlation values, especially between Bitcoin price and Google Trends data, arguing our initial idea based on studies about trends in stock and goods market.",
"title": ""
},
{
"docid": "ff5c993fd071b31b6f639d1f64ce28b0",
"text": "We show that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential tasks. Our pragmatics-enabled models reason about why speakers produce certain instructions, and about how listeners will react upon hearing them. Like previous pragmatic models, we use learned base listener and speaker models to build a pragmatic speaker that uses the base listener to simulate the interpretation of candidate descriptions, and a pragmatic listener that reasons counterfactually about alternative descriptions. We extend these models to tasks with sequential structure. Evaluation of language generation and interpretation shows that pragmatic inference improves state-of-the-art listener models (at correctly interpreting human instructions) and speaker models (at producing instructions correctly interpreted by humans) in diverse settings.",
"title": ""
},
{
"docid": "9dadd96558791417495a5e1afa031851",
"text": "INTRODUCTION\nLittle information is available on malnutrition-related factors among school-aged children ≥5 years in Ethiopia. This study describes the prevalence of stunting and thinness and their related factors in Libo Kemkem and Fogera, Amhara Regional State and assesses differences between urban and rural areas.\n\n\nMETHODS\nIn this cross-sectional study, anthropometrics and individual and household characteristics data were collected from 886 children. Height-for-age z-score for stunting and body-mass-index-for-age z-score for thinness were computed. Dietary data were collected through a 24-hour recall. Bivariate and backward stepwise multivariable statistical methods were employed to assess malnutrition-associated factors in rural and urban communities.\n\n\nRESULTS\nThe prevalence of stunting among school-aged children was 42.7% in rural areas and 29.2% in urban areas, while the corresponding figures for thinness were 21.6% and 20.8%. Age differences were significant in both strata. In the rural setting, fever in the previous 2 weeks (OR: 1.62; 95% CI: 1.23-2.32), consumption of food from animal sources (OR: 0.51; 95% CI: 0.29-0.91) and consumption of the family's own cattle products (OR: 0.50; 95% CI: 0.27-0.93), among others factors were significantly associated with stunting, while in the urban setting, only age (OR: 4.62; 95% CI: 2.09-10.21) and years of schooling of the person in charge of food preparation were significant (OR: 0.88; 95% CI: 0.79-0.97). Thinness was statistically associated with number of children living in the house (OR: 1.28; 95% CI: 1.03-1.60) and family rice cultivation (OR: 0.64; 95% CI: 0.41-0.99) in the rural setting, and with consumption of food from animal sources (OR: 0.26; 95% CI: 0.10-0.67) and literacy of head of household (OR: 0.24; 95% CI: 0.09-0.65) in the urban setting.\n\n\nCONCLUSION\nThe prevalence of stunting was significantly higher in rural areas, whereas no significant differences were observed for thinness. Various factors were associated with one or both types of malnutrition, and varied by type of setting. To effectively tackle malnutrition, nutritional programs should be oriented to local needs.",
"title": ""
},
{
"docid": "00d7c524d4f56cbee795914c00739d12",
"text": "Computational Thinking is an essential skill for all students in the 21st Century. A fundamental question is how can we create computer affordances to empower novice teachers and students, in a variety of STEM and art disciplines, to think computationally while avoiding difficult overhead emerging from traditional coding? Over the last 20 years we have iteratively developed tools that aim to support computational thinking. As these tools evolved a philosophy emerged to support Computational Thinking by joining human abilities with computer affordances. Chief among these findings is that supporting Computational Thinking is much more than making coding accessible. Computational Thinking Tools aim to minimize coding overhead by supporting users through three fundamental stages of the Computational Thinking development cycle: problem formulation, solution expression, and solution execution/evaluation.",
"title": ""
}
] |
scidocsrr
|
f9b7d215e550e185353cf679080a888b
|
Interaction-aware occupancy prediction of road vehicles
|
[
{
"docid": "fb8518678126415b58f1b934235ccc79",
"text": "One significant barrier in introducing autonomous driving is the liability issue of a collision; e.g. when two autonomous vehicles collide, it is unclear which vehicle should be held accountable. To solve this issue, we view traffic rules from legal texts as requirements for autonomous vehicles. If we can prove that an autonomous vehicle always satisfies these requirements during its operation, then it cannot be held responsible in a collision. We present our approach by formalising a subset of traffic rules from the Vienna Convention on Road Traffic for highway scenarios in Isabelle/HOL.",
"title": ""
}
] |
[
{
"docid": "854d06ba08492ad68ea96c73908f81ca",
"text": "We describe Swapout, a new stochastic training method, that outperforms ResNets of identical network structure yielding impressive results on CIFAR-10 and CIFAR100. Swapout samples from a rich set of architectures including dropout [20], stochastic depth [7] and residual architectures [5, 6] as special cases. When viewed as a regularization method swapout not only inhibits co-adaptation of units in a layer, similar to dropout, but also across network layers. We conjecture that swapout achieves strong regularization by implicitly tying the parameters across layers. When viewed as an ensemble training method, it samples a much richer set of architectures than existing methods such as dropout or stochastic depth. We propose a parameterization that reveals connections to exiting architectures and suggests a much richer set of architectures to be explored. We show that our formulation suggests an efficient training method and validate our conclusions on CIFAR-10 and CIFAR-100 matching state of the art accuracy. Remarkably, our 32 layer wider model performs similar to a 1001 layer ResNet model.",
"title": ""
},
{
"docid": "c011b2924151df9c4e90865d8ab8d856",
"text": "The growing demand for food poses major challenges to humankind. We have to safeguard both biodiversity and arable land for future agricultural food production, and we need to protect genetic diversity to safeguard ecosystem resilience. We must produce more food with less input, while deploying every effort to minimize risk. Agricultural sustainability is no longer optional but mandatory. There is still an on-going debate among researchers and in the media on the best strategy to keep pace with global population growth and increasing food demand. One strategy favors the use of genetically modified (GM) crops, while another strategy focuses on agricultural biodiversity. Here, we discuss two obstacles to sustainable agriculture solutions. The first obstacle is the claim that genetically modified crops are necessary if we are to secure food production within the next decades. This claim has no scientific support, but is rather a reflection of corporate interests. The second obstacle is the resultant shortage of research funds for agrobiodiversity solutions in comparison with funding for research in genetic modification of crops. Favoring biodiversity does not exclude any future biotechnological contributions, but favoring biotechnology threatens future biodiversity resources. An objective review of current knowledge places GM crops far down the list of potential solutions in the coming decades. We conclude that much of the research funding currently available for the development of GM crops would be much better spent in other research areas of plant science, e.g., nutrition, policy research, governance, and solutions close to local market conditions if the goal is to provide sufficient food for the world’s growing population in a sustainable way.",
"title": ""
},
{
"docid": "ee8ac41750c7d1545af54e812d7f2d9c",
"text": "The demand for more sophisticated Location-Based Services (LBS) in terms of applications variety and accuracy is tripling every year since the emergence of the smartphone a few years ago. Equally, smartphone manufacturers are mounting several wireless communication and localization technologies, inertial sensors as well as powerful processing capability, to cater to such LBS applications. A hybrid of wireless technologies is needed to provide seamless localization solutions and to improve accuracy, to reduce time to fix, and to reduce power consumption. The review of localization techniques/technologies of this emerging field is therefore important. This article reviews the recent research-oriented and commercial localization solutions on smartphones. The focus of this article is on the implementation challenges associated with utilizing these positioning solutions on Android-based smartphones. Furthermore, the taxonomy of smartphone-location techniques is highlighted with a special focus on the detail of each technique and its hybridization. The article compares the indoor localization techniques based on accuracy, utilized wireless technology, overhead, and localization technique used. The pursuit of achieving ubiquitous localization outdoors and indoors for critical LBS applications such as security and safety shall dominate future research efforts.",
"title": ""
},
{
"docid": "8ed2bb129f08657b896f5033c481db8f",
"text": "simple and fast reflectional symmetry detection algorithm has been developed in this Apaper. The algorithm employs only the original gray scale image and the gradient information of the image, and it is able to detect multiple reflectional symmetry axes of an object in the image. The directions of the symmetry axes are obtained from the gradient orientation histogram of the input gray scale image by using the Fourier method. Both synthetic and real images have been tested using the proposed algorithm.",
"title": ""
},
{
"docid": "3105a48f0b8e45857e8d48e26b258e04",
"text": "Dominated by the behavioral science approach for a long time, information systems research increasingly acknowledges design science as a complementary approach. While primarily information systems instantiations, but also constructs and models have been discussed quite comprehensively, the design of methods is addressed rarely. But methods appear to be of utmost importance particularly for organizational engineering. This paper justifies method construction as a core approach to organizational engineering. Based on a discussion of fundamental scientific positions in general and approaches to information systems research in particular, appropriate conceptualizations of 'method' and 'method construction' are presented. These conceptualizations are then discussed regarding their capability of supporting organizational engineering. Our analysis is located on a meta level: Method construction is conceptualized and integrated from a large number of references. Method instantiations or method engineering approaches however are only referenced and not described in detail.",
"title": ""
},
{
"docid": "4147fee030667122923f420ab55e38f7",
"text": "In this paper we propose a replacement algorithm, SF-LRU (second chance-frequency - least recently used) that combines the LRU (least recently used) and the LFU (least frequently used) using the second chance concept. A comprehensive comparison is made between our algorithm and both LRU and LFU algorithms. Experimental results show that the SF-LRU significantly reduces the number of cache misses compared the other two algorithms. Simulation results show that our algorithm can provide a maximum value of approximately 6.3% improvement in the miss ratio over the LRU algorithm in data cache and approximately 9.3% improvement in miss ratio in instruction cache. This performance improvement is attributed to the fact that our algorithm provides a second chance to the block that may be deleted according to LRU's rules. This is done by comparing the frequency of the block with the block next to it in the set.",
"title": ""
},
{
"docid": "5a46d347e83aec7624dde84ecdd5302c",
"text": "This paper presents a new algorithm to automatically solve algebra word problems. Our algorithm solves a word problem via analyzing a hypothesis space containing all possible equation systems generated by assigning the numbers in the word problem into a set of equation system templates extracted from the training data. To obtain a robust decision surface, we train a log-linear model to make the margin between the correct assignments and the false ones as large as possible. This results in a quadratic programming (QP) problem which can be efficiently solved. Experimental results show that our algorithm achieves 79.7% accuracy, about 10% higher than the state-of-the-art baseline (Kushman et al., 2014).",
"title": ""
},
{
"docid": "d0e5ddcc0aa85ba6a3a18796c335dcd2",
"text": "A novel planar end-fire circularly polarized (CP) complementary Yagi array antenna is proposed. The antenna has a compact and complementary structure, and exhibits excellent properties (low profile, single feed, broadband, high gain, and CP radiation). It is based on a compact combination of a pair of complementary Yagi arrays with a common driven element. In the complementary structure, the vertical polarization is contributed by a microstrip patch Yagi array, while the horizontal polarization is yielded by a strip dipole Yagi array. With the combination of the two orthogonally polarized Yagi arrays, a CP antenna with high gain and wide bandwidth is obtained. With a profile of <inline-formula> <tex-math notation=\"LaTeX\">$0.05\\lambda _{\\mathrm{0}}$ </tex-math></inline-formula> (3 mm), the antenna has a gain of about 8 dBic, an impedance bandwidth (<inline-formula> <tex-math notation=\"LaTeX\">$\\vert S_{11}\\vert < -10 $ </tex-math></inline-formula> dB) of 13.09% (4.57–5.21 GHz) and a 3-dB axial-ratio bandwidth of 10.51% (4.69–5.21 GHz).",
"title": ""
},
{
"docid": "b19cbe5e99f2edb701ba22faa7406073",
"text": "There are many wireless monitoring and control applications for industrial and home markets which require longer battery life, lower data rates and less complexity than available from existing wireless standards. These standards provide higher data rates at the expense of power consumption, application complexity and cost. What these markets need, in many cases, is a standardsbased wireless technology having the performance characteristics that closely meet the requirements for reliability, security, low power and low cost. This standards-based, interoperable wireless technology will address the unique needs of low data rate wireless control and sensor-based networks.",
"title": ""
},
{
"docid": "701d822e68ed2c74670f6a7a8d06631a",
"text": "With the increasing dependence of enterprises on IT, and with the widely spreading use of e-business, IT governance is attracting increasing worldwide attention. A proper IT governance would promote enterprise performance through intelligent and efficient utilization of IT. In addition, standard IT governance practices would provide a suitable open environment for e-business that provides compatibility for inter-enterprise interaction. This review is concerned with introducing the current state of IT governance, in four main steps. First, the review identifies what is meant by IT governance, and presents the main organizations concerned with its development, namely: ISACA (Information Systems Audit and Control Association) and ITGI (Information Technology Governance Institute). Secondly, the review highlights COBIT (Control Objectives for Information and related Technologies) the widely acknowledged IT governance framework, produced by ITGI. Thirdly, the current state of COBIT use is addressed using a recent global survey. Finally, comments and recommendations on the future development of IT governance are concluded. Understanding IT governance The word governance brings attention to the more familiar word government. To Webster's dictionary [1], both are of the same meaning. The dictionary defines the word government as \"the individual or body that exercises administrative power\". The word is known to be of Greek origin, and means \"to steer\" [2]. Currently, the two words are usually used to mean two related, but different, meanings. While the word government is defined as",
"title": ""
},
{
"docid": "a532dcd3dbaf3ba784d1f5f8623b600c",
"text": "Our long term interest is in building inference algorithms capable of answering questions and producing human-readable explanations by aggregating information from multiple sources and knowledge bases. Currently information aggregation (also referred to as “multi-hop inference”) is challenging for more than two facts due to “semantic drift”, or the tendency for natural language inference algorithms to quickly move off-topic when assembling long chains of knowledge. In this paper we explore the possibility of generating large explanations with an average of six facts by automatically extracting common explanatory patterns from a corpus of manually authored elementary science explanations represented as lexically-connected explanation graphs grounded in a semi-structured knowledge base of tables. We empirically demonstrate that there are sufficient common explanatory patterns in this corpus that it is possible in principle to reconstruct unseen explanation graphs by merging multiple explanatory patterns, then adapting and/or adding to their knowledge. This may ultimately provide a mechanism to allow inference algorithms to surpass the two-fact “aggregation horizon” in practice by using common explanatory patterns as constraints to limit the search space during information aggregation.",
"title": ""
},
{
"docid": "9b96a97426917b18dab401423e777b92",
"text": "Anatomical and biophysical modeling of left atrium (LA) and proximal pulmonary veins (PPVs) is important for clinical management of several cardiac diseases. Magnetic resonance imaging (MRI) allows qualitative assessment of LA and PPVs through visualization. However, there is a strong need for an advanced image segmentation method to be applied to cardiac MRI for quantitative analysis of LA and PPVs. In this study, we address this unmet clinical need by exploring a new deep learning-based segmentation strategy for quantification of LA and PPVs with high accuracy and heightened efficiency. Our approach is based on a multi-view convolutional neural network (CNN) with an adaptive fusion strategy and a new loss function that allows fast and more accurate convergence of the backpropagation based optimization. After training our network from scratch by using more than 60K 2D MRI images (slices), we have evaluated our segmentation strategy to the STACOM 2013 cardiac segmentation challenge benchmark. Qualitative and quantitative evaluations, obtained from the segmentation challenge, indicate that the proposed method achieved the state-of-the-art sensitivity (90%), specificity (99%), precision (94%), and efficiency levels (10 seconds in GPU, and 7.5 minutes in CPU).",
"title": ""
},
{
"docid": "78c567177285309ca3100fb15d6ee113",
"text": "The ability to discover the topic of a large set of text documents using relevant keyphrases is usually regarded as a very tedious task if done by hand. Automatic keyphrase extraction from multi-document data sets or text clusters provides a very compact summary of the contents of the clusters, which often helps in locating information easily. We introduce an algorithm for topic discovery using keyphrase extraction from multi-document sets and clusters based on frequent and significant shared phrases between documents. The keyphrases extracted by the algorithm are highly accurate and fit the cluster topic. The algorithm is independent of the domain of the documents. Subjective as well as quantitative evaluation show that the algorithm outperforms keyword-based cluster-labeling algorithms, and is capable of accurately discovering the topic, and often ranking it in the top one or two extracted keyphrases.",
"title": ""
},
{
"docid": "fb3018d852c2a7baf96fb4fb1233b5e5",
"text": "The term twin spotting refers to phenotypes characterized by the spatial and temporal co-occurrence of two (or more) different nevi arranged in variable cutaneous patterns, and can be associated with extra-cutaneous anomalies. Several examples of twin spotting have been described in humans including nevus vascularis mixtus, cutis tricolor, lesions of overgrowth, and deficient growth in Proteus and Elattoproteus syndromes, epidermolytic hyperkeratosis of Brocq, and the so-called phacomatoses pigmentovascularis and pigmentokeratotica. We report on a 28-year-old man and a 15-year-old girl, who presented with a previously unrecognized association of paired cutaneous vascular nevi of the telangiectaticus and anemicus types (naevus vascularis mixtus) distributed in a mosaic pattern on the face (in both patients) and over the entire body (in the man) and a complex brain malformation (in both patients) consisting of cerebral hemiatrophy, hypoplasia of the cerebral vessels and homolateral hypertrophy of the skull and sinuses (known as Dyke-Davidoff-Masson malformation). Both patients had facial asymmetry and the young man had facial dysmorphism, seizures with EEG anomalies, hemiplegia, insulin-dependent diabetes mellitus (IDDM), autoimmune thyroiditis, a large hepatic cavernous vascular malformation, and left Legg-Calvé-Perthes disease (LCPD) [LCPD-like presentation]. Array-CGH analysis and mutation analysis of the RASA1 gene were normal in both patients.",
"title": ""
},
{
"docid": "e55b0182c47c7aba4d65fac1ad3a3fa2",
"text": "117 © 2009 EMDR International Association DOI: 10.1891/1933-3196.3.3.117 “Experiencing trauma is an essential part of being human; history is written in blood” (van der Kolk & McFarlane, 1996, p. 3). As humans, however, we do have an extraordinary ability to adapt to trauma, and resilience is our most common response (Bonanno, 2005). Nonetheless, traumatic experiences can alter one’s social, psychological, and biological equilibrium, and for years memories of the event can taint experiences in the present. Despite advances in our knowledge of posttraumatic stress disorder (PTSD) and the development of psychosocial treatments, almost half of those who engage in treatment for PTSD fail to fully recover (Bradley, Greene, Russ, Dutra, & Westen, 2005). Furthermore, no theory as yet provides an adequate account of all the complex phenomena and processes involved in PTSD, and our understanding of the mechanisms that underlie effective treatment, such as eye movement desensitization and reprocessing (EMDR) and exposure therapy remains unclear. Historical Overview of PTSD",
"title": ""
},
{
"docid": "63ab6c486aa8025c38bd5b7eadb68cfa",
"text": "The demands on a natural language understanding system used for spoken language differ somewhat from the demands of text processing. For processing spoken language, there is a tension between the system being as robust as necessary, and as constrained as possible. The robust system will a t tempt to find as sensible an interpretation as possible, even in the presence of performance errors by the speaker, or recognition errors by the speech recognizer. In contrast, in order to provide language constraints to a speech recognizer, a system should be able to detect that a recognized string is not a sentence of English, and disprefer that recognition hypothesis from the speech recognizer. If the coupling is to be tight, with parsing and recognition interleaved, then the parser should be able to enforce as many constraints as possible for partial utterances. The approach taken in Gemini is to tightly constrain language recognition to limit overgeneration, but to extend the language analysis to recognize certain characteristic patterns of spoken utterances (but not generally thought of as part of grammar) and to recognize specific types of performance errors by the speaker.",
"title": ""
},
{
"docid": "7bf8b7e4698bd0ef951879f68083fd7e",
"text": "Brain injury induced by fluid percussion in rats caused a marked elevation in extracellular glutamate and aspartate adjacent to the trauma site. This increase in excitatory amino acids was related to the severity of the injury and was associated with a reduction in cellular bioenergetic state and intracellular free magnesium. Treatment with the noncompetitive N-methyl-D-aspartate (NMDA) antagonist dextrophan or the competitive antagonist 3-(2-carboxypiperazin-4-yl)propyl-1-phosphonic acid limited the resultant neurological dysfunction; dextrorphan treatment also improved the bioenergetic state after trauma and increased the intracellular free magnesium. Thus, excitatory amino acids contribute to delayed tissue damage after brain trauma; NMDA antagonists may be of benefit in treating acute head injury.",
"title": ""
},
{
"docid": "3c83e3b5484cada8b2cfe8943c9ce5f7",
"text": "Automatic human gesture recognition from camera images is an interesting topic for developing intelligent vision systems. In this paper, we propose a convolution neural network (CNN) method to recognize hand gestures of human task activities from a camera image. To achieve the robustness performance, the skin model and the calibration of hand position and orientation are applied to obtain the training and testing data for the CNN. Since the light condition seriously affects the skin color, we adopt a Gaussian Mixture model (GMM) to train the skin model which is used to robustly filter out non-skin colors of an image. The calibration of hand position and orientation aims at translating and rotating the hand image to a neutral pose. Then the calibrated images are used to train the CNN. In our experiment, we provided a validation of the proposed method on recognizing human gestures which shows robust results with various hand positions and orientations and light conditions. Our experimental evaluation of seven subjects performing seven hand gestures with average recognition accuracies around 95.96% shows the feasibility and reliability of the proposed method.",
"title": ""
},
{
"docid": "e53678707c57dce8d2e91afa04e99aaa",
"text": "Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces.",
"title": ""
},
{
"docid": "5d82469913da465c7445359dcdbbc89b",
"text": "There is increasing interest in using synthetic aperture radar (SAR) images in automated target recognition and decision-making tasks. The success of such tasks depends on how well the reconstructed SAR images exhibit certain features of the underlying scene. Based on the observation that typical underlying scenes usually exhibit sparsity in terms of such features, this paper presents an image formation method that formulates the SAR imaging problem as a sparse signal representation problem. For problems of complex-valued nature, such as SAR, a key challenge is how to choose the dictionary and the representation scheme for effective sparse representation. Since features of the SAR reflectivity magnitude are usually of interest, the approach is designed to sparsely represent the magnitude of the complex-valued scattered field. This turns the image reconstruction problem into a joint optimisation problem over the representation of magnitude and phase of the underlying field reflectivities. The authors develop the mathematical framework for this method and propose an iterative solution for the corresponding joint optimisation problem. The experimental results demonstrate the superiority of this method over previous approaches in terms of both producing high-quality SAR images and exhibiting robustness to uncertain or limited data.",
"title": ""
}
] |
scidocsrr
|
62bb1b4ceff56b80231506c19332b423
|
Uniqueness of medical data mining
|
[
{
"docid": "edcf1cb4d09e0da19c917eab9eab3b23",
"text": "The paper describes a computerized process of myocardial perfusion diagnosis from cardiac single proton emission computed tomography (SPECT) images using data mining and knowledge discovery approach. We use a six-step knowledge discovery process. A database consisting of 267 cleaned patient SPECT images (about 3000 2D images), accompanied by clinical information and physician interpretation was created first. Then, a new user-friendly algorithm for computerizing the diagnostic process was designed and implemented. SPECT images were processed to extract a set of features, and then explicit rules were generated, using inductive machine learning and heuristic approaches to mimic cardiologist's diagnosis. The system is able to provide a set of computer diagnoses for cardiac SPECT studies, and can be used as a diagnostic tool by a cardiologist. The achieved results are encouraging because of the high correctness of diagnoses.",
"title": ""
}
] |
[
{
"docid": "a12b30d8c4180d7f3957f3a27c61e59b",
"text": "A novel method of feature selection combined with sample selection is proposed to select discriminant features in this paper. Based on support vector machine trained on training set, the samples excluding the misclassified samples and support vector samples are used to select informative features during the procedure of recursive feature selection. The feature selection method is applied to seven datasets, and the classification results of the selected discriminant features show that the method is effective and reliable for selecting features with high classification information.",
"title": ""
},
{
"docid": "cbb6bac245862ed0265f6d32e182df92",
"text": "With the explosion of online communication and publication, texts become obtainable via forums, chat messages, blogs, book reviews and movie reviews. Usually, these texts are much short and noisy without sufficient statistical signals and enough information for a good semantic analysis. Traditional natural language processing methods such as Bow-of-Word (BOW) based probabilistic latent semantic models fail to achieve high performance due to the short text environment. Recent researches have focused on the correlations between words, i.e., term dependencies, which could be helpful for mining latent semantics hidden in short texts and help people to understand them. Long short-term memory (LSTM) network can capture term dependencies and is able to remember the information for long periods of time. LSTM has been widely used and has obtained promising results in variants of problems of understanding latent semantics of texts. At the same time, by analyzing the texts, we find that a number of keywords contribute greatly to the semantics of the texts. In this paper, we establish a keyword vocabulary and propose an LSTM-based model that is sensitive to the words in the vocabulary; hence, the keywords leverage the semantics of the full document. The proposed model is evaluated in a short-text sentiment analysis task on two datasets: IMDB and SemEval-2016, respectively. Experimental results demonstrate that our model outperforms the baseline LSTM by 1%~2% in terms of accuracy and is effective with significant performance enhancement over several non-recurrent neural network latent semantic models (especially in dealing with short texts). We also incorporate the idea into a variant of LSTM named the gated recurrent unit (GRU) model and achieve good performance, which proves that our method is general enough to improve different deep learning models.",
"title": ""
},
{
"docid": "b209b606f09888157098a3d6054df148",
"text": "A major challenge in paraphrase research is the lack of parallel corpora. In this paper, we present a new method to collect large-scale sentential paraphrases from Twitter by linking tweets through shared URLs. The main advantage of our method is its simplicity, as it gets rid of the classifier or human in the loop needed to select data before annotation and subsequent application of paraphrase identification algorithms in the previous work. We present the largest human-labeled paraphrase corpus to date of 51,524 sentence pairs and the first cross-domain benchmarking for automatic paraphrase identification. In addition, we show that more than 30,000 new sentential paraphrases can be easily and continuously captured every month at ∼70% precision, and demonstrate their utility for downstream NLP tasks through phrasal paraphrase extraction. We make our code and data freely available.1",
"title": ""
},
{
"docid": "2b40c6f6a9fc488524c23e11cd57a00b",
"text": "An overview of the basics of metaphorical thought and language from the perspective of Neurocognition, the integrated interdisciplinary study of how conceptual thought and language work in the brain. The paper outlines a theory of metaphor circuitry and discusses how everyday reason makes use of embodied metaphor circuitry.",
"title": ""
},
{
"docid": "d1041afcb50a490034740add2cce3f0d",
"text": "Inverse synthetic aperture radar imaging of moving targets with a stepped frequency waveform presents unique challenges. Intra-step target motion introduces phase discontinuities between frequency bands, which in turn produce degraded range side lobes. Frequency stitching of the stepped-frequency waveform to emulate a contiguous bandwidth can dramatically reduce the effective pulse repetition frequency, which then may impact the maximize target size that can be unambiguously measured and imaged via ISAR. This paper analyzes these effects and validates results via simulated data.",
"title": ""
},
{
"docid": "41c35407c55878910f5dfc2dfe083955",
"text": "This work deals with several aspects concerning the formal verification of SN P systems and the computing power of some variants. A methodology based on the information given by the transition diagram associated with an SN P system is presented. The analysis of the diagram cycles codifies invariants formulae which enable us to establish the soundness and completeness of the system with respect to the problem it tries to resolve. We also study the universality of asynchronous and sequential SN P systems and the capability these models have to generate certain classes of languages. Further, by making a slight modification to the standard SN P systems, we introduce a new variant of SN P systems with a special I/O mode, called SN P modules, and study their computing power. It is demonstrated that, as string language acceptors and transducers, SN P modules can simulate several types of computing devices such as finite automata, a-finite transducers, and systolic trellis automata.",
"title": ""
},
{
"docid": "259a530bcd24668e863b69559e41e425",
"text": "Perceptual quality assessment of 3D triangular meshes is crucial for a variety of applications. In this paper, we present a new objective metric for assessing the visual difference between a reference triangular mesh and its distorted version produced by lossy operations, such as noise addition, simplification, compression and watermarking. The proposed metric is based on the measurement of the distance between curvature tensors of the two meshes under comparison. Our algorithm uses not only tensor eigenvalues (i.e., curvature amplitudes) but also tensor eigenvectors (i.e., principal curvature directions) to derive a perceptually-oriented tensor distance. The proposed metric also accounts for the visual masking effect of the human visual system, through a roughness-based weighting of the local tensor distance. A final score that reflects the visual difference between two meshes is obtained via a Minkowski pooling of the weighted local tensor distances over the mesh surface. We validate the performance of our algorithm on four subjectively-rated visual mesh quality databases, and compare the proposed method with state-of-the-art objective metrics. Experimental results show that our approach achieves high correlation between objective scores and subjective assessments.",
"title": ""
},
{
"docid": "b5a4a32c7dceadfa4923057e426bf753",
"text": "It has been theorized that suicide behaviours amongst indigenous peoples may be an outcome of mass trauma experienced as a result of colonization. In Canada, qualitative evidence has suggested that the Indian Residential School System set in motion a cycle of trauma, with some survivors reporting subsequent abuse, suicide, and other related behaviours. It has been further postulated that the effects of trauma can also be passed inter-generationally. Today, there are four generations of Canadian First Nations residential school survivors who may have transmitted the trauma they experienced to their own children and grandchildren. No empirical study has ever been undertaken to demonstrate this dynamic. This study is therefore the first to investigate whether a direct or indirect exposure to Canada's residential school system is associated with trauma and suicide behaviour histories. Data were collected in 2002/2003 from a representative sample of Manitoba, Canada, First Nations adults (N = 2953), including residential (N = 611) and non-residential school attendees (N = 2342). Regression analyses showed that for residential school attendees negative experiences in residential school were associated with a history of abuse, and that this history and being of younger age was associated with a history of suicide thoughts, whereas abuse history only was associated with a history of suicide attempts. For First Nations adults who did not attend a residential school, we found that age 28-44, female sex, not having a partner, and having a parent or grandparent who attended a residential school was associated with a history of abuse. This history, along with age and having had a parent or grandparent who attended residential school was associated with a history of suicide thoughts and attempts. In conclusion, this is the first study to empirically demonstrate, at the population level, the mental health impact of the residential school system on survivors and their children.",
"title": ""
},
{
"docid": "86332184a278d13b0ec8c814c9d8bb04",
"text": "In this study, I analyze Bitcoin transaction data and build an economic model on Bitcoin traders incentives to decompose the Bitcoin price into a utility-driven component, a speculative component, and a friction component. The model I build extends the LDA (Latent-Dirichlet-Allocation) model, and I perform a paralleled collapsed Gibbs Sampling method to estimate the realized incentives of each individual trader at each time point. For post-estimation analysis, I look into major headline news to see which how information or rumor affects the different components of the Bitcoin price. The preliminary results show interesting patterns of trading and pricing in the Bitcoin market for the first time.",
"title": ""
},
{
"docid": "44b7ed6c8297b6f269c8b872b0fd6266",
"text": "vii",
"title": ""
},
{
"docid": "c5731d7290f1ab073c12bf67101a386a",
"text": "Convolutional neural networks have emerged as the leading method for the classification and segmentation of images. In some cases, it is desirable to focus the attention of the net on a specific region in the image; one such case is the recognition of the contents of transparent vessels, where the vessel region in the image is already known. This work presents a valve filter approach for focusing the attention of the net on a region of interest (ROI). In this approach, the ROI is inserted into the net as a binary map. The net uses a different set of convolution filters for the ROI and background image regions, resulting in a different set of features being extracted from each region. More accurately, for each filter used on the image, a corresponding valve filter exists that acts on the ROI map and determines the regions in which the corresponding image filter will be used. This valve filter effectively acts as a valve that inhibits specific features in different image regions according to the ROI map. In addition, a new data set for images of materials in glassware vessels in a chemistry laboratory setting is presented. This data set contains a thousand images with pixel-wise annotation according to categories ranging from filled and empty to the exact phase of the material inside the vessel. The results of the valve filter approach and fully convolutional neural nets (FCN) with no ROI input are compared based on this data set.",
"title": ""
},
{
"docid": "5625166c3e84059dd7b41d3c0e37e080",
"text": "External border surveillance is critical to the security of every state and the challenges it poses are changing and likely to intensify. Wireless sensor networks (WSN) are a low cost technology that provide an intelligence-led solution to effective continuous monitoring of large, busy, and complex landscapes. The linear network topology resulting from the structure of the monitored area raises challenges that have not been adequately addressed in the literature to date. In this paper, we identify an appropriate metric to measure the quality of WSN border crossing detection. Furthermore, we propose a method to calculate the required number of sensor nodes to deploy in order to achieve a specified level of coverage according to the chosen metric in a given belt region, while maintaining radio connectivity within the network. Then, we contribute a novel cross layer routing protocol, called levels division graph (LDG), designed specifically to address the communication needs and link reliability for topologically linear WSN applications. The performance of the proposed protocol is extensively evaluated in simulations using realistic conditions and parameters. LDG simulation results show significant performance gains when compared with its best rival in the literature, dynamic source routing (DSR). Compared with DSR, LDG improves the average end-to-end delays by up to 95%, packet delivery ratio by up to 20%, and throughput by up to 60%, while maintaining comparable performance in terms of normalized routing load and energy consumption.",
"title": ""
},
{
"docid": "5d5742db6d7a4c95451f071bf7841077",
"text": "Automatic detection of diseases is a growing field of interest, and machine learning in form of deep learning neural networks are frequently explored as a potential tool for the medical video analysis. To both improve the \"black box\"-understanding and assist in the administrative duties of writing an examination report, we release an automated multimedia reporting software dissecting the neural network to learn the intermediate analysis steps, i.e., we are adding a new level of understanding and explainability by looking into the deep learning algorithms decision processes. The presented open-source software can be used for easy retrieval and reuse of data for automatic report generation, comparisons, teaching and research. As an example, we use live colonoscopy as a use case which is the gold standard examination of the large bowel, commonly performed for clinical and screening purposes. The added information has potentially a large value, and reuse of the data for the automatic reporting may potentially save the doctors large amounts of time.",
"title": ""
},
{
"docid": "a0a9785ee7688a601e678b4b8d40cb91",
"text": "We present a light-weight machine learning tool for NLP research. The package supports operations on both discrete and dense vectors, facilitating implementation of linear models as well as neural models. It provides several basic layers which mainly aims for single-layer linear and non-linear transformations. By using these layers, we can conveniently implement linear models and simple neural models. Besides, this package also integrates several complex layers by composing those basic layers, such as RNN, Attention Pooling, LSTM and gated RNN. Those complex layers can be used to implement deep neural models directly.",
"title": ""
},
{
"docid": "c2d17d5a5db10efafa4e56a2b6cd7afa",
"text": "The main purpose of analyzing the social network data is to observe the behaviors and trends that are followed by people. How people interact with each other, what they usually share, what are their interests on social networks, so that analysts can focus new trends for the provision of those things which are of great interest for people so in this paper an easy approach of gathering and analyzing data through keyword based search in social networks is examined using NodeXL and data is gathered from twitter in which political trends have been analyzed. As a result it will be analyzed that, what people are focusing most in politics.",
"title": ""
},
{
"docid": "1329c0a07ac6993403a1d7c08ee9f54d",
"text": "This paper presents an authoring tool for building location-based mobile games, enhanced with augmented reality capabilities. These games are a subclass of pervasive games in which the gameplay evolves and progresses according to player's location. We have conducted a literature review on authoring tools and pervasive games to (i) collect the common scenarios of current location-based mobile games, and (ii) features of authoring tools for these games. Additionally, we have also used the focus groups methodology to find new scenarios for location-based mobile games, in particular regarding how augmented reality can be used in these games. Both literature review and focus groups provide us a set of requirements to design a software architecture for our authoring tool, a web-based application where games are created and executed. In our approach, games are designed as a set of missions that can be ordered or not. Players use mobile devices to perform these missions in order to complete each game. Our main objective is to provide a software solution to enable non-programmers users to design, build and run location-based mobile games. In order to evaluate our tool, we present a game design called \"Battle for Fortaleza\", and how this game is implemented in our solution.",
"title": ""
},
{
"docid": "78bf0b1d4065fd0e1740589c4e060c70",
"text": "This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking and visual summation in order to determine whether the distortions in the distorted image are visible. If the distortions are below the threshold of detection, the distorted image is deemed to be of perfect visual fidelity (VSNR = infin)and no further analysis is required. If the distortions are suprathreshold, a second stage is applied which operates based on the low-level visual property of perceived contrast, and the mid-level visual property of global precedence. These two properties are modeled as Euclidean distances in distortion-contrast space of a multiscale wavelet decomposition, and VSNR is computed based on a simple linear sum of these distances. The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates based on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.",
"title": ""
},
{
"docid": "2b588e18ff6826bd9b077f539777a27a",
"text": "Big data phenomenon arises from the increasing number of data collected from various sources, including the internet. Big data is not only about the size or volume. Big data posses specific characteristics (volume, variety, velocity, and value - 4V) that make it difficult to manage from security point of view. The evolution of data to become big data rises another important issues about data security and its management. NIST defines guide for conducting risk assessments on data, including risk management process and risk assessment. This paper looks at NIST risk management guidance and determines whether the approach of this standard is applicable to big data by generally define the threat source, threat events, vulnerabilities, likelihood of occurence and impact. The result of this study will be a general framework defining security management on Big data.",
"title": ""
},
{
"docid": "5dfc0ec364055f79d19ee8cf0b0cfeff",
"text": "Cancer cachexia is a common problem among advanced cancer patients. A mixture of β-hydroxyl β-methyl butyrate, glutamine, and arginine (HMB/Arg/Gln) previously showed activity for increasing lean body mass (LBM) among patients with cancer cachexia. Therefore a phase III trial was implemented to confirm this activity. Four hundred seventy-two advanced cancer patients with between 2% and 10% weight loss were randomized to a mixture of β-hydroxyl β-methyl butyrate, glutamine, and arginine or an isonitrogenous, isocaloric control mixture taken twice a day for 8 weeks. Lean body mass was estimated by bioimpedance and skin-fold measurements. Body plethysmography was used when available. Weight, the Schwartz Fatigue Scale, and the Spitzer Quality of Life Scale were also measured. Only 37% of the patients completed protocol treatment. The majority of the patient loss was because of patient preference (45% of enrolled patients). However, loss of power was not an issue because of the planned large target sample size. Based on an intention to treat analysis, there was no statistically significant difference in the 8-week lean body mass between the two arms. The secondary endpoints were also not significantly different between the arms. Based on the results of the area under the curve (AUC) analysis, patients receiving HMB/Arg/Gln had a strong trend higher LBM throughout the study as measured by both bioimpedance (p = 0.08) and skin-fold measurements (p = 0.08). Among the subset of patients receiving concurrent chemotherapy, there were again no significant differences in the endpoints. The secondary endpoints were also not significantly different between the arms. This trial was unable to adequately test the ability of β-hydroxy β-methylbutyrate, glutamine, and arginine to reverse or prevent lean body mass wasting among cancer patients. Possible contributing factors beyond the efficacy of the intervention were the inability of patients to complete an 8-week course of treatment and return in a timely fashion for follow-up assessment, and because the patients may have only had weight loss possible not related to cachexia, but other causes of weight loss, such as decreased appetite. However, there was a strong trend towards an increased body mass among patients taking the Juven® compound using the secondary endpoint of AUC.",
"title": ""
},
{
"docid": "7bf8b7e4698bd0ef951879f68083fd7e",
"text": "Brain injury induced by fluid percussion in rats caused a marked elevation in extracellular glutamate and aspartate adjacent to the trauma site. This increase in excitatory amino acids was related to the severity of the injury and was associated with a reduction in cellular bioenergetic state and intracellular free magnesium. Treatment with the noncompetitive N-methyl-D-aspartate (NMDA) antagonist dextrophan or the competitive antagonist 3-(2-carboxypiperazin-4-yl)propyl-1-phosphonic acid limited the resultant neurological dysfunction; dextrorphan treatment also improved the bioenergetic state after trauma and increased the intracellular free magnesium. Thus, excitatory amino acids contribute to delayed tissue damage after brain trauma; NMDA antagonists may be of benefit in treating acute head injury.",
"title": ""
}
] |
scidocsrr
|
f177518fa8695384cb8ef7b0647e7236
|
Practical Byzantine Group Communication
|
[
{
"docid": "fc1c3291c631562a6d1b34d5b5ccd27e",
"text": "There are many methods for making a multicast protocol “reliable.” At one end of the spectrum, a reliable multicast protocol might offer tomicity guarantees, such as all-or-nothing delivery, delivery ordering, and perhaps additional properties such as virtually synchronous addressing. At the other are protocols that use local repair to overcome transient packet loss in the network, offering “best effort” reliability. Yet none of this prior work has treated stability of multicast delivery as a basic reliability property, such as might be needed in an internet radio, television, or conferencing application. This article looks at reliability with a new goal: development of a multicast protocol which is reliable in a sense that can be rigorously quantified and includes throughput stability guarantees. We characterize this new protocol as a “bimodal multicast” in reference to its reliability model, which corresponds to a family of bimodal probability distributions. Here, we introduce the protocol, provide a theoretical analysis of its behavior, review experimental results, and discuss some candidate applications. These confirm that bimodal multicast is reliable, scalable, and that the protocol provides remarkably stable delivery throughput.",
"title": ""
}
] |
[
{
"docid": "4bc65e3c420fae22b2b78de36b8b7bf3",
"text": "This paper presents a tuturial introduction to predictions of stock time series. The various approaches of technical and fundamental analysis is presented and the prediction problem is formulated as a special case of inductive learning. The problems with performance evaluation of near-random-walk processes are illustrated with examples together with guidelines for avoiding the risk of data-snooping. The connections to concepts like \"the bias/variance dilemma\", overtraining and model complexity are further covered. Existing benchmarks and testing metrics are surveyed and some new measures are introduced.",
"title": ""
},
{
"docid": "55631b81d46fc3dcaad8375176cb1c68",
"text": "UNLABELLED\nThe need for long-term retention to prevent post-treatment tooth movement is now widely accepted by orthodontists. This may be achieved with removable retainers or permanent bonded retainers. This article aims to provide simple guidance for the dentist on how to maintain and repair both removable and fixed retainers.\n\n\nCLINICAL RELEVANCE\nThe general dental practitioner is more likely to review patients over time and needs to be aware of the need for long-term retention and how to maintain and repair the retainers.",
"title": ""
},
{
"docid": "2f6b866048c302b93b7f4eaf0907c7e1",
"text": "This study aimed to determine differences in speech perception and subjective preference after upgrade from the FSP coding strategy to the FS4 or FS4p coding strategies. Subjects were tested at the point of upgrade (n=10), and again at 1-(n=10), 3-(n=8), 6-(n=8) and 12 months (n=8) after the upgrade to the FS4 or FS4p coding strategy. In between test intervals patients had to use the FS4 or FS4p strategy in everyday life. Primary outcome measures, chosen to best evaluate individual speech understanding, were the Freiburg Monosyllable Test in quiet, the Oldenburg Sentence Test (OLSA) in noise, and the Hochmair-Schulz-Moser (HSM) Sentence Test in noise. To measure subjective sound quality the Hearing Implant Sound Quality Index was used. Subjects with the FS4/FS4p strategy performed as well as subjects with the FSP coding strategy in the speech tests. The subjective perception of subjects showed that subjects perceived a ‘moderate’ or ‘poor’ auditory benefit with the FS4/FS4p coding strategy. Subjects with the FS4 or FS4p coding strategies perform well in everyday situations. Both coding strategies offer another tool to individualize the fitting of audio processors and grant access to satisfying sound quality and speech perception.",
"title": ""
},
{
"docid": "b03b41f27b3046156a922f858349d4ed",
"text": "Charophytes are macrophytic green algae, occurring in standing and running waters throughout the world. Species descriptions of charophytes are contradictive and different determination keys use various morphologic characters for species discrimination. Chara intermedia Braun, C. baltica Bruzelius and C. hispida Hartman are treated as three species by most existing determination keys, though their morphologic differentiation is based on different characteristics. Amplified fragment length polymorphism (AFLP) was used to detect genetically homogenous groups within the C. intermedia-C. baltica-C. hispida-cluster, by the analysis of 122 C. intermedia, C. baltica and C. hispida individuals from central and northern Europe. C. hispida clustered in a distinct genetic group in the AFLP analysis and could be determined morphologically by its aulacanthous cortification. However, for C. intermedia and C. baltica no single morphologic character was found that differentiated the two genetic groups, thus C. intermedia and C. baltica are considered as cryptic species. All C. intermedia specimen examined came from freshwater habitats, whereas the second group, C. baltica, grew in brackish water. We conclude that the species differentiation between C. intermedia and C. baltica, which is assumed to be reflected by the genetic discrimination groups, corresponds more with ecological (salinity preference) than morphologic characteristics. Based on the genetic analysis three differing colonization models of the Baltic Sea and the Swedish lakes with C. baltica and C. intermedia were discussed. As samples of C. intermedia and C. baltica have approximately the same Jaccard coefficient for genetic similarity, we suggest that C. baltica colonized the Baltic Sea after the last glacial maximum from refugia along the Atlantic and North Sea coasts. Based on the similarity of C. intermedia intermediate individuals of Central Europe and Sweden we assume a colonization of the Swedish lakes from central Europe.",
"title": ""
},
{
"docid": "af0178d0bb154c3995732e63b94842ca",
"text": "Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains.",
"title": ""
},
{
"docid": "d59c6a2dd4b6bf7229d71f3ae036328a",
"text": "Community search over large graphs is a fundamental problem in graph analysis. Recent studies propose to compute top-k influential communities, where each reported community not only is a cohesive subgraph but also has a high influence value. The existing approaches to the problem of top-k influential community search can be categorized as index-based algorithms and online search algorithms without indexes. The index-based algorithms, although being very efficient in conducting community searches, need to pre-compute a specialpurpose index and only work for one built-in vertex weight vector. In this paper, we investigate online search approaches and propose an instance-optimal algorithm LocalSearch whose time complexity is linearly proportional to the size of the smallest subgraph that a correct algorithm needs to access without indexes. In addition, we also propose techniques to make LocalSearch progressively compute and report the communities in decreasing influence value order such that k does not need to be specified. Moreover, we extend our framework to the general case of top-k influential community search regarding other cohesiveness measures. Extensive empirical studies on real graphs demonstrate that our algorithms outperform the existing online search algorithms by several orders of magnitude.",
"title": ""
},
{
"docid": "a0787399eaca5b59a87ed0644da10fc6",
"text": "This work faces the problem of combining the outputs of two co-siting BTS, one operating with 2G networks and the other with 3G (or 4G) networks. This requirement is becoming more and more frequent because many operators, for increasing the capacity for data and voice signal transmission, have overlaid the new network in 3G or 4G technology to the existing 2G infrastructure. The solution here proposed is constituted by a low loss combiner realized through a directional double single-sided filtering system, which manages both TX and RX signals from each BTS output. The design approach for the combiner architecture is described with a particular emphasis on the synthesis of the double single-sided filters (realized by means of extracted pole technique). A prototype of the low-loss combiner has been designed and fabricated for validating the proposed approach. The results obtained are here discussed making into evidence the pros & cons of the proposed solution.",
"title": ""
},
{
"docid": "b7e8da8733a2edd31d1fe53236f5eedf",
"text": "Cancer stem cell (CSC) biology and tumor immunology have shaped our understanding of tumorigenesis. However, we still do not fully understand why tumors can be contained but not eliminated by the immune system and whether rare CSCs are required for tumor propagation. Long latency or recurrence periods have been described for most tumors. Conceptually, this requires a subset of malignant cells which is capable of initiating tumors, but is neither eliminated by immune cells nor able to grow straight into overt tumors. These criteria would be fulfilled by CSCs. Stem cells are pluripotent, immune-privileged, and long-living, but depend on specialized niches. Thus, latent tumors may be maintained by a niche-constrained reservoir of long-living CSCs that are exempt from immunosurveillance while niche-independent and more immunogenic daughter cells are constantly eliminated. The small subpopulation of CSCs is often held responsible for tumor initiation, metastasis, and recurrence. Experimentally, this hypothesis was supported by the observation that only this subset can propagate tumors in non-obese diabetic/scid mice, which lack T and B cells. Yet, the concept was challenged when an unexpectedly large proportion of melanoma cells were found to be capable of seeding complex tumors in mice which further lack NK cells. Moreover, the link between stem cell-like properties and tumorigenicity was not sustained in these highly immunodeficient animals. In humans, however, tumor-propagating cells must also escape from immune-mediated destruction. The ability to persist and to initiate neoplastic growth in the presence of immunosurveillance - which would be lost in a maximally immunodeficient animal model - could hence be a decisive criterion for CSCs. Consequently, integrating scientific insight from stem cell biology and tumor immunology to build a new concept of \"CSC immunology\" may help to reconcile the outlined contradictions and to improve our understanding of tumorigenesis.",
"title": ""
},
{
"docid": "f1d0fc62f47c5fd4f47716a337fd9ed0",
"text": "We present the system architecture of a mobile outdoor augmented reality system for the Archeoguide project. We begin with a short introduction to the project. Then we present the hardware we chose for the mobile system and we describe the system architecture we designed for the software implementation. We conclude this paper with the first results obtained from experiments we made during our trials at ancient Olympia in Greece.",
"title": ""
},
{
"docid": "40fda9cba754c72f1fba17dd3a5759b2",
"text": "Humans can easily recognize handwritten words, after gaining basic knowledge of languages. This knowledge needs to be transferred to computers for automatic character recognition. The work proposed in this paper tries to automate recognition of handwritten hindi isolated characters using multiple classifiers. For feature extraction, it uses histogram of oriented gradients as one feature and profile projection histogram as another feature. The performance of various classifiers has been evaluated using theses features experimentally and quadratic SVM has been found to produce better results.",
"title": ""
},
{
"docid": "8387c06436e850b4fb00c6b5e0dcf19f",
"text": "Since the beginning of the epidemic, human immunodeficiency virus (HIV) has infected around 70 million people worldwide, most of whom reside is sub-Saharan Africa. There have been very promising developments in the treatment of HIV with anti-retroviral drug cocktails. However, drug resistance to anti-HIV drugs is emerging, and many people infected with HIV have adverse reactions or do not have ready access to currently available HIV chemotherapies. Thus, there is a need to discover new anti-HIV agents to supplement our current arsenal of anti-HIV drugs and to provide therapeutic options for populations with limited resources or access to currently efficacious chemotherapies. Plant-derived natural products continue to serve as a reservoir for the discovery of new medicines, including anti-HIV agents. This review presents a survey of plants that have shown anti-HIV activity, both in vitro and in vivo.",
"title": ""
},
{
"docid": "32097bd3faa683f451ae982554f8ef5b",
"text": "According to the growth of the Internet technology, there is a need to develop strategies in order to maintain security of system. One of the most effective techniques is Intrusion Detection System (IDS). This system is created to make a complete security in a computerized system, in order to pass the Intrusion system through the firewall, antivirus and other security devices detect and deal with it. The Intrusion detection techniques are divided into two groups which includes supervised learning and unsupervised learning. Clustering which is commonly used to detect possible attacks is one of the branches of unsupervised learning. Fuzzy sets play an important role to reduce spurious alarms and Intrusion detection, which have uncertain quality.This paper investigates k-means fuzzy and k-means algorithm in order to recognize Intrusion detection in system which both of the algorithms use clustering method.",
"title": ""
},
{
"docid": "63b2bc943743d5b8ef9220fd672df84f",
"text": "In multiagent systems, we often have a set of agents each of which have a preference ordering over a set of items and one would like to know these preference orderings for various tasks, for example, data analysis, preference aggregation, voting etc. However, we often have a large number of items which makes it impractical to ask the agents for their complete preference ordering. In such scenarios, we usually elicit these agents’ preferences by asking (a hopefully small number of) comparison queries — asking an agent to compare two items. Prior works on preference elicitation focus on unrestricted domain and the domain of single peaked preferences and show that the preferences in single peaked domain can be elicited by much less number of queries compared to unrestricted domain. We extend this line of research and study preference elicitation for single peaked preferences on trees which is a strict superset of the domain of single peaked preferences. We show that the query complexity crucially depends on the number of leaves, the path cover number, and the distance from path of the underlying single peaked tree, whereas the other natural parameters like maximum degree, diameter, pathwidth do not play any direct role in determining query complexity. We then investigate the query complexity for finding a weak Condorcet winner for preferences single peaked on a tree and show that this task has much less query complexity than preference elicitation. Here again we observe that the number of leaves in the underlying single peaked tree and the path cover number of the tree influence the query complexity of the problem.",
"title": ""
},
{
"docid": "d8802a7fcdbd306bd474f3144bc688a4",
"text": "Shape from defocus (SFD) is one of the most popular techniques in monocular 3D vision. While most SFD approaches require two or more images of the same scene captured at a fixed view point, this paper presents an efficient approach to estimate absolute depth from a single defocused image. Instead of directly measuring defocus level of each pixel, we propose to design a sequence of aperture-shape filters to segment a defocused image by defocus level. A boundary-weighted belief propagation algorithm is employed to obtain a smooth depth map. We also give an estimation of depth error. Extensive experiments show that our approach outperforms the state-of-the-art single-image SFD approaches both in precision of the estimated absolute depth and running time.",
"title": ""
},
{
"docid": "eb8d681fcfd5b18c15dd09738ab4717c",
"text": "Building a dialogue agent to fulfill complex tasks, such as travel planning, is challenging because the agent has to learn to collectively complete multiple subtasks. For example, the agent needs to reserve a hotel and book a flight so that there leaves enough time for commute between arrival and hotel check-in. This paper addresses this challenge by formulating the task in the mathematical framework of options over Markov Decision Processes (MDPs), and proposing a hierarchical deep reinforcement learning approach to learning a dialogue manager that operates at different temporal scales. The dialogue manager consists of (1) a top-level dialogue policy that selects among subtasks or options, (2) a low-level dialogue policy that selects primitive actions to complete the subtask given by the top-level policy, and (3) a global state tracker that helps ensure all cross-subtask constraints be satisfied. Experiments on a travel planning task with simulated and real users show that our approach leads to significant improvements over two baselines, one based on handcrafted rules and the other based on flat deep reinforcement learning.",
"title": ""
},
{
"docid": "e236a7cd184bbd09c9ffd90ad4cfd636",
"text": "It has been a challenge for financial economists to explain some stylized facts observed in securities markets, among them, high levels of trading volume. The most prominent explanation of excess volume is overconfidence. High market returns make investors overconfident and as a consequence, these investors trade more subsequently. The aim of our paper is to study the impact of the phenomenon of overconfidence on the trading volume and its role in the formation of the excess volume on the Tunisian stock market. Based on the work of Statman, Thorley and Vorkink (2006) and by using VAR models and impulse response functions, we find little evidence of the overconfidence hypothesis when we use volume (shares traded) as proxy of trading volume.",
"title": ""
},
{
"docid": "957863eafec491fae0710dd33c043ba8",
"text": "In this paper, we present an automated behavior analysis system developed to assist the elderly and individuals with disabilities who live alone, by learning and predicting standard behaviors to improve the efficiency of their healthcare. Established behavioral patterns have been recorded using wireless sensor networks composed by several event-based sensors that captured raw measures of the actions of each user. Using these data, behavioral patterns of the residents were extracted using Bayesian statistics. The behavior was statistically estimated based on three probabilistic features we introduce, namely sensor activation likelihood, sensor sequence likelihood, and sensor event duration likelihood. Real data obtained from different home environments were used to verify the proposed method in the individual analysis. The results suggest that the monitoring system can be used to detect anomalous behavior signs which could reflect changes in health status of the user, thus offering an opportunity to intervene if required.",
"title": ""
},
{
"docid": "9e7ff381dc439d9129ba936c7f067189",
"text": "We present a method for the extraction of synonyms for German particle verbs based on a word-aligned German-English parallel corpus: by translating the particle verb to a pivot, which is then translated back, a set of synonym candidates can be extracted and ranked according to the respective translation probabilities. In order to deal with separated particle verbs, we apply re-ordering rules to the German part of the data. In our evaluation against a gold standard, we compare different pre-processing strategies (lemmatized vs. inflected forms) and introduce language model scores of synonym candidates in the context of the input particle verb as well as distributional similarity as additional re-ranking criteria. Our evaluation shows that distributional similarity as a re-ranking feature is more robust than language model scores and leads to an improved ranking of the synonym candidates. In addition to evaluating against a gold standard, we also present a small-scale manual evaluation.",
"title": ""
},
{
"docid": "4fd421bbe92b40e85ffd66cf0084b1b8",
"text": "Real-time performance of adaptive digital signal processing algorithms is required in many applications but it often means a high computational load for many conventional processors. In this paper, we present a configurable hardware architecture for adaptive processing of noisy signals for target detection based on Constant False Alarm Rate (CFAR) algorithms. The architecture has been designed to deal with parallel/pipeline processing and to be configured for three version of CFAR algorithms, the Cell-Average, the Max and the Min CFAR. The proposed architecture has been implemented on a Field Programmable Gate Array (FPGA) device providing good performance improvements over software implementations. FPGA implementation results are presented and discussed.",
"title": ""
},
{
"docid": "183df189a37dc4c4a174792fb8464d3d",
"text": "Rule engines form an essential component of most service execution frameworks in a Service Oriented Architecture (SOA) ecosystem. The efficiency of a service execution framework critically depends on the performance of the rule engine it uses to manage it's operations. Most common rule engines suffer from the fundamental performance issues of the Rete algorithm that they internally use for faster matching of rules against incoming facts. In this paper, we present the design of a scalable architecture of a service rule engine, where a rule clustering and hashing based mechanism is employed for lazy loading of relevant service rules and a prediction based technique for rule evaluation is used for faster actuation of the rules. We present experimental results to demonstrate the efficacy of the proposed rule engine framework over contemporary ones.",
"title": ""
}
] |
scidocsrr
|
63a8336428573c7ebc00658f69108a58
|
Measuring Calorie and Nutrition From Food Image
|
[
{
"docid": "f4fbd925fb46f05c526b228993f5e326",
"text": "Obesity in the world has spread to epidemic proportions. In 2008 the World Health Organization (WHO) reported that 1.5 billion adults were suffering from some sort of overweightness. Obesity treatment requires constant monitoring and a rigorous control and diet to measure daily calorie intake. These controls are expensive for the health care system, and the patient regularly rejects the treatment because of the excessive control over the user. Recently, studies have suggested that the usage of technology such as smartphones may enhance the treatments of obesity and overweight patients; this will generate a degree of comfort for the patient, while the dietitian can count on a better option to record the food intake for the patient. In this paper we propose a smart system that takes advantage of the technologies available for the Smartphones, to build an application to measure and monitor the daily calorie intake for obese and overweight patients. Via a special technique, the system records a photo of the food before and after eating in order to estimate the consumption calorie of the selected food and its nutrient components. Our system presents a new instrument in food intake measuring which can be more useful and effective.",
"title": ""
}
] |
[
{
"docid": "bfcb1fd882a328daab503a7dd6b6d0a6",
"text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several non-trivial examples.",
"title": ""
},
{
"docid": "3ebe9aecd4c84e9b9ed0837bd294b4ed",
"text": "A bond graph model of a hybrid electric vehicle (HEV) powertrain test cell is proposed. The test cell consists of a motor/generator coupled to a HEV powertrain and powered by a bidirectional power converter. Programmable loading conditions, including positive and negative resistive and inertial loads of any magnitude are modeled, avoiding the use of mechanical inertial loads involved in conventional test cells. The dynamics and control equations of the test cell are derived directly from the bond graph models. The modeling and simulation results of the dynamics of the test cell are validated through experiments carried out on a scaled-down system.",
"title": ""
},
{
"docid": "2313822a08269b3dd125190c4874b808",
"text": "General-purpose knowledge bases are increasingly growing in terms of depth (content) and width (coverage). Moreover, algorithms for entity linking and entity retrieval have improved tremendously in the past years. These developments give rise to a new line of research that exploits and combines these developments for the purposes of text-centric information retrieval applications. This tutorial focuses on a) how to retrieve a set of entities for an ad-hoc query, or more broadly, assessing relevance of KB elements for the information need, b) how to annotate text with such elements, and c) how to use this information to assess the relevance of text. We discuss different kinds of information available in a knowledge graph and how to leverage each most effectively.\n We start the tutorial with a brief overview of different types of knowledge bases, their structure and information contained in popular general-purpose and domain-specific knowledge bases. In particular, we focus on the representation of entity-centric information in the knowledge base through names, terms, relations, and type taxonomies. Next, we will provide a recap on ad-hoc object retrieval from knowledge graphs as well as entity linking and retrieval. This is essential technology, which the remainder of the tutorial builds on. Next we will cover essential components within successful entity linking systems, including the collection of entity name information and techniques for disambiguation with contextual entity mentions. We will present the details of four previously proposed systems that successfully leverage knowledge bases to improve ad-hoc document retrieval. These systems combine the notion of entity retrieval and semantic search on one hand, with text retrieval models and entity linking on the other. Finally, we also touch on entity aspects and links in the knowledge graph as it can help to understand the entities' context.\n This tutorial is the first to compile, summarize, and disseminate progress in this emerging area and we provide both an overview of state-of-the-art methods and outline open research problems to encourage new contributions.",
"title": ""
},
{
"docid": "876dd0a985f00bb8145e016cc8593a84",
"text": "This paper presents how to synthesize a texture in a procedural way that preserves the features of the input exemplar. The exemplar is analyzed in both spatial and frequency domains to be decomposed into feature and non-feature parts. Then, the non-feature parts are reproduced as a procedural noise, whereas the features are independently synthesized. They are combined to output a non-repetitive texture that also preserves the exemplar’s features. The proposed method allows the user to control the extent of extracted features and also enables a texture to edited quite effectively.",
"title": ""
},
{
"docid": "9df6a4c0143cfc3a0b1263b1fa07e810",
"text": "In this paper, we propose a new fast dehazing method from single image based on filtering. The basic idea is to compute an accurate atmosphere veil that is not only smoother, but also respect with depth information of the underlying image. We firstly obtain an initial atmosphere scattering light through median filtering, then refine it by guided joint bilateral filtering to generate a new atmosphere veil which removes the abundant texture information and recovers the depth edge information. Finally, we solve the scene radiance using the atmosphere attenuation model. Compared with exiting state of the art dehazing methods, our method could get a better dehazing effect at distant scene and places where depth changes abruptly. Our method is fast with linear complexity in the number of pixels of the input image; furthermore, as our method can be performed in parallel, thus it can be further accelerated using GPU, which makes our method applicable for real-time requirement.",
"title": ""
},
{
"docid": "6aaabe17947bc455d940047745ed7962",
"text": "In this paper, we want to study how natural and engineered systems could perform complex optimizations with limited computational and communication capabilities. We adopt a continuous-time dynamical system view rooted in early work on optimization and more recently in network protocol design, and merge it with the dynamic view of distributed averaging systems. We obtain a general approach, based on the control system viewpoint, that allows to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems. The control system viewpoint provides many insights and new directions of research. We apply the framework to a distributed optimal location problem and demonstrate the natural tracking and adaptation capabilities of the system to changing constraints.",
"title": ""
},
{
"docid": "6c647c3260c0a31cac1a3cd412919aad",
"text": "Twitter is a micro-blogging site that allows users and companies to post brief pieces of information called Tweets . Some of the tweets contain keywords such as Hashtags denoted with a # , essentially one word summaries of either the topic or emotion of the tweet. The goal of this paper is to examine an approach to perform hashtag discovery on Twitter posts that do not contain user labeled hashtags. The process described in this paper is geared to be as automatic as possible, taking advantage of web information, sentiment analysis, geographic location, basic filtering and classification processes, to generate hashtags for tweets. Hashtags provide users and search queries a fast and simple basis to filter and find information that they are interested in.",
"title": ""
},
{
"docid": "a20a03fcb848c310cb966f6e6bc37c86",
"text": "A broad class of problems at the core of computational imaging, sensing, and low-level computer vision reduces to the inverse problem of extracting latent images that follow a prior distribution, from measurements taken under a known physical image formation model. Traditionally, hand-crafted priors along with iterative optimization methods have been used to solve such problems. In this paper we present unrolled optimization with deep priors, a principled framework for infusing knowledge of the image formation into deep networks that solve inverse problems in imaging, inspired by classical iterative methods. We show that instances of the framework outperform the state-of-the-art by a substantial margin for a wide variety of imaging problems, such as denoising, deblurring, and compressed sensing magnetic resonance imaging (MRI). Moreover, we conduct experiments that explain how the framework is best used and why it outperforms previous methods.",
"title": ""
},
{
"docid": "d5a772fa54c9a0d40e7a879831f79654",
"text": "It is almost certainly the case that many populations have always existed as metapopulations, leading to the conclusion that local extinctions are common and normally balanced by migrations. This conclusion has major consequences for biodiversity conservation in fragmented tropical forests and the agricultural matrices in which they are embedded. Here we make the argument that the conservation paradigm that focuses on setting aside pristine forests while ignoring the agricultural landscape is a failed strategy in light of what is now conventional wisdom in ecology. Given the fragmented nature of most tropical ecosystems, agricultural landscapes should be an essential component of any conservation strategy. We review the literature on biodiversity in tropical agricultural landscapes and present evidence that many tropical agricultural systems have high levels of biodiversity (planned and associated). These systems represent, not only habitat for biodiversity, but also a high-quality matrix that permits the movement of forest organisms among patches of natural vegetation. We review a variety of agroecosystem types and conclude that diverse, low-input systems using agroecological principles are probably the best option for a high-quality matrix. Such systems are most likely to be constructed by small farmers with land titles, who, in turn, are normally the consequence of grassroots social movements. Therefore, the new conservation paradigm should incorporate a landscape approach in which small farmers, through their social organizations, work with conservationists to create a landscape matrix dominated by productive agroecological systems that facilitate interpatch migration while promoting a sustainable and dignified livelihood for rural communities.",
"title": ""
},
{
"docid": "89dd97465c8373bb9dabf3cbb26a4448",
"text": "Unidirectional connections from the cortex to the matrix of the corpus striatum initiate the cortico-basal ganglia (BG)-thalamocortical loop, thought to be important in momentary action selection and in longer-term fine tuning of behavioural repertoire; a discrete set of striatal compartments, striosomes, has the complementary role of registering or anticipating reward that shapes corticostriatal plasticity. Re-entrant signals traversing the cortico-BG loop impact predominantly frontal cortices, conveyed through topographically ordered output channels; by contrast, striatal input signals originate from a far broader span of cortex, and are far more divergent in their termination. The term ‘disclosed loop’ is introduced to describe this organisation: a closed circuit that is open to outside influence at the initial stage of cortical input. The closed circuit component of corticostriatal afferents is newly dubbed ‘operative’, as it is proposed to establish the bid for action selection on the part of an incipient cortical action plan; the broader set of converging corticostriatal afferents is described as contextual. A corollary of this proposal is that every unit of the striatal volume, including the long, C-shaped tail of the caudate nucleus, should receive a mandatory component of operative input, and hence include at least one area of BG-recipient cortex amongst the sources of its corticostriatal afferents. Individual operative afferents contact twin classes of GABAergic striatal projection neuron (SPN), distinguished by their neurochemical character, and onward circuitry. This is the basis of the classic direct and indirect pathway model of the cortico-BG loop. Each pathway utilises a serial chain of inhibition, with two such links, or three, providing positive and negative feedback, respectively. Operative co-activation of direct and indirect SPNs is, therefore, pictured to simultaneously promote action, and to restrain it. The balance of this rival activity is determined by the contextual inputs, which summarise the external and internal sensory environment, and the state of ongoing behavioural priorities. Notably, the distributed sources of contextual convergence upon a striatal locus mirror the transcortical network harnessed by the origin of the operative input to that locus, thereby capturing a similar set of contingencies relevant to determining action. The disclosed loop formulation of corticostriatal and subsequent BG loop circuitry, as advanced here, refines the operating rationale of the classic model and allows the integration of more recent anatomical and physiological data, some of which can appear at variance with the classic model. Equally, it provides a lucid functional context for continuing cellular studies of SPN biophysics and mechanisms of synaptic plasticity.",
"title": ""
},
{
"docid": "0b1bb42b175ed925b357112d869d3ddd",
"text": "While location is one of the most important context information in mobile and ubiquitous computing, large-scale deployment of indoor localization system remains elusive.\n In this work, we propose PiLoc, an indoor localization system that utilizes opportunistically sensed data contributed by users. Our system does not require manual calibration, prior knowledge and infrastructure support. The key novelty of PiLoc is that it merges walking segments annotated with displacement and signal strength information from users to derive a map of walking paths annotated with radio signal strengths.\n We evaluate PiLoc over 4 different indoor areas. Evaluation shows that our system can achieve an average localization error of 1.5m.",
"title": ""
},
{
"docid": "27408da448d237ec9bfe7f2eeb94743c",
"text": "Background: Vaccinium arctostaphylos L. (Caucasian whortleberry) fruit is used as an antihyperglycemic agent for treatment of diabetes mellitus. Objective: The effects of whortleberry fruit and leaf extracts on the blood levels of fasting glucose, HbA1c (glycosylated hemoglobin), insulin, creatinine and liver enzymes SGOT and SGPT in alloxan-diabetic rats as well as LD50s of the extracts in rats were studied. Methods: The effects of 2 months daily gavage of each extract at the doses of 250 mg/kg, 500 mg/kg and 1000 mg/kg on the parameters after single alloxan intraperitoneal injection at a dose of 125 mg/kg in the rats were evaluated. To calculate LD50 (median lethal dose), each extract was gavaged to groups of 30 healthy male and female Wistar rats at various doses once and the number of dead animals in each group within 72 hours was determined. Results: Alloxan injection resulted in significant increase of fasting glucose and HbA1c levels but decreased insulin levels significantly. Oral administration of whortleberry fruit and leaf extracts (each at the doses of 250, 500 and 1000 mg/kg) significantly reduced the fasting glucose and HbA1c levels but significantly increased the insulin levels without any significant effects on the SGOT, SGPT and creatinine levels in the diabetic rats compared with the control diabetic rats. The LD50s of the extracts were more than 15 g/kg. Conclusion: Whortleberry fruits and leaves may have anti-hyperglycemic and blood insulin level elevating effects without hepatic and renal toxicities in the alloxan-diabetic rats and are relatively nontoxic in rats.",
"title": ""
},
{
"docid": "42c297b74abd95bbe70bb00ddb0aa925",
"text": "IMPASS (Intelligent Mobility Platform with Active Spoke System) is a novel locomotion system concept that utilizes rimless wheels with individually actuated spokes to provide the ability to step over large obstacles like legs, adapt to uneven surfaces like tracks, yet retaining the speed and simplicity of wheels. Since it lacks the complexity of legs and has a large effective (wheel) diameter, this highly adaptive system can move over extreme terrain with ease while maintaining respectable travel speeds. This paper presents the concept, preliminary kinematic analyses and design of an IMPASS based robot with two actuated spoke wheels and an articulated tail. The actuated spoke wheel concept allows multiple modes of motion, which give it the ability to assume a stable stance using three contact points per wheel, walk with static stability with two contact points per wheel, or stride quickly using one contact point per wheel. Straight-line motion and considerations for turning are discussed for the oneand two-point contact schemes followed by the preliminary design and recommendations for future study. Index Terms – IMPASS, rimless wheel, actuated spoke wheel, mobility, locomotion.",
"title": ""
},
{
"docid": "5cfc2b3a740d0434cf0b3c2812bd6e7a",
"text": "Well, someone can decide by themselves what they want to do and need to do but sometimes, that kind of person will need some a logical approach to discrete math references. People with open minded will always try to seek for the new things and information from many sources. On the contrary, people with closed mind will always think that they can do it by their principals. So, what kind of person are you?",
"title": ""
},
{
"docid": "8980bdf92581e8a0816364362fec409b",
"text": "OBJECTIVE\nPrenatal exposure to inappropriate levels of glucocorticoids (GCs) and maternal stress are putative mechanisms for the fetal programming of later health outcomes. The current investigation examined the influence of prenatal maternal cortisol and maternal psychosocial stress on infant physiological and behavioral responses to stress.\n\n\nMETHODS\nThe study sample comprised 116 women and their full term infants. Maternal plasma cortisol and report of stress, anxiety and depression were assessed at 15, 19, 25, 31 and 36 + weeks' gestational age. Infant cortisol and behavioral responses to the painful stress of a heel-stick blood draw were evaluated at 24 hours after birth. The association between prenatal maternal measures and infant cortisol and behavioral stress responses was examined using hierarchical linear growth curve modeling.\n\n\nRESULTS\nA larger infant cortisol response to the heel-stick procedure was associated with exposure to elevated concentrations of maternal cortisol during the late second and third trimesters. Additionally, a slower rate of behavioral recovery from the painful stress of a heel-stick blood draw was predicted by elevated levels of maternal cortisol early in pregnancy as well as prenatal maternal psychosocial stress throughout gestation. These associations could not be explained by mode of delivery, prenatal medical history, socioeconomic status or child race, sex or birth order.\n\n\nCONCLUSIONS\nThese data suggest that exposure to maternal cortisol and psychosocial stress exerts programming influences on the developing fetus with consequences for infant stress regulation.",
"title": ""
},
{
"docid": "57e70bca420ca75412758ef8591c99ab",
"text": "We present graph partition neural networks (GPNN), an extension of graph neural networks (GNNs) able to handle extremely large graphs. GPNNs alternate between locally propagating information between nodes in small subgraphs and globally propagating information between the subgraphs. To efficiently partition graphs, we experiment with spectral partitioning and also propose a modified multi-seed flood fill for fast processing of large scale graphs. We extensively test our model on a variety of semi-supervised node classification tasks. Experimental results indicate that GPNNs are either superior or comparable to state-of-the-art methods on a wide variety of datasets for graph-based semi-supervised classification. We also show that GPNNs can achieve similar performance as standard GNNs with fewer propagation steps.",
"title": ""
},
{
"docid": "2b9bc83596deb55302bb6f4314410269",
"text": "Collaborative Filtering: A Machine Learning Perspective Benjamin Marlin Master of Science Graduate Department of Computer Science University of Toronto 2004 Collaborative filtering was initially proposed as a framework for filtering information based on the preferences of users, and has since been refined in many different ways. This thesis is a comprehensive study of rating-based, pure, non-sequential collaborative filtering. We analyze existing methods for the task of rating prediction from a machine learning perspective. We show that many existing methods proposed for this task are simple applications or modifications of one or more standard machine learning methods for classification, regression, clustering, dimensionality reduction, and density estimation. We introduce new prediction methods in all of these classes. We introduce a new experimental procedure for testing stronger forms of generalization than has been used previously. We implement a total of nine prediction methods, and conduct large scale prediction accuracy experiments. We show interesting new results on the relative performance of these methods.",
"title": ""
},
{
"docid": "115b89c782465a740e5e7aa2cae52669",
"text": "Japan discards approximately 18 million tonnes of food annually, an amount that accounts for 40% of national food production. In recent years, a number of measures have been adopted at the institutional level to tackle this issue, showing increasing commitment of the government and other organizations. Along with the aim of environmental sustainability, food waste recycling, food loss prevention and consumer awareness raising in Japan are clearly pursuing another common objective. Although food loss and waste problems have been publicly acknowledged only very recently, strong implications arise from the economic and cultural history of the Japanese food system. Specific national concerns over food security have accompanied the formulation of current national strategies whose underlying causes and objectives add a unique facet to Japan’s efforts with respect to those of other developed countries’. Fighting Food Loss and Food Waste in Japan",
"title": ""
},
{
"docid": "c4dbfff3966e2694727aa171e29fa4bd",
"text": "The ability to recognize known places is an essential competence of any intelligent system that operates autonomously over longer periods of time. Approaches that rely on the visual appearance of distinct scenes have recently been developed and applied to large scale SLAM scenarios. FAB-Map is maybe the most successful of these systems. Our paper proposes BRIEF-Gist, a very simplistic appearance-based place recognition system based on the BRIEF descriptor. BRIEF-Gist is much more easy to implement and more efficient compared to recent approaches like FAB-Map. Despite its simplicity, we can show that it performs comparably well as a front-end for large scale SLAM. We benchmark our approach using two standard datasets and perform SLAM on the 66 km long urban St. Lucia dataset.",
"title": ""
}
] |
scidocsrr
|
204df2cafff8e3fb05db32b61d2a4ae9
|
COBBLER: combining column and row enumeration for closed pattern discovery
|
[
{
"docid": "b8dae71335b9c6caa95bed38d32f102a",
"text": "Mining frequent closed itemsets provides complete and non-redundant results for frequent pattern analysis. Extensive studies have proposed various strategies for efficient frequent closed itemset mining, such as depth-first search vs. breadthfirst search, vertical formats vs. horizontal formats, tree-structure vs. other data structures, top-down vs. bottom-up traversal, pseudo projection vs. physical projection of conditional database, etc. It is the right time to ask \"what are the pros and cons of the strategies?\" and \"what and how can we pick and integrate the best strategies to achieve higher performance in general cases?\"In this study, we answer the above questions by a systematic study of the search strategies and develop a winning algorithm CLOSET+. CLOSET+ integrates the advantages of the previously proposed effective strategies as well as some ones newly developed here. A thorough performance study on synthetic and real data sets has shown the advantages of the strategies and the improvement of CLOSET+ over existing mining algorithms, including CLOSET, CHARM and OP, in terms of runtime, memory usage and scalability.",
"title": ""
},
{
"docid": "e66f2052a2e9a7e870f8c1b4f2bfb56d",
"text": "New algorithms with previous native palm pdf reader approaches, with gains of over an order of magnitude using.We present two new algorithms for solving this problem. Regularities, association rules, and gave an algorithm for finding such rules. 4 An.fast discovery of association rules based on our ideas in 33, 35. New algorithms with previous approaches, with gains of over an order of magnitude using.",
"title": ""
}
] |
[
{
"docid": "c39fe902027ba5cb5f0fa98005596178",
"text": "Twitter is used extensively in the United States as well as globally, creating many opportunities to augment decision support systems with Twitterdriven predictive analytics. Twitter is an ideal data source for decision support: its users, who number in the millions, publicly discuss events, emotions, and innumerable other topics; its content is authored and distributed in real time at no charge; and individual messages (also known as tweets) are often tagged with precise spatial and temporal coordinates. This article presents research investigating the use of spatiotemporally tagged tweets for crime prediction. We use Twitter-specific linguistic analysis and statistical topic modeling to automatically identify discussion topics across a major city in the United States. We then incorporate these topics into a crime prediction model and show that, for 19 of the 25 crime types we studied, the addition of Twitter data improves crime prediction performance versus a standard approach based on kernel density estimation. We identify a number of performance bottlenecks that could impact the use of Twitter in an actual decision support system. We also point out important areas of future work for this research, including deeper semantic analysis of message con∗Email address: [email protected]; Tel.: 1+ 434 924 5397; Fax: 1+ 434 982 2972 Preprint submitted to Decision Support Systems January 14, 2014 tent, temporal modeling, and incorporation of auxiliary data sources. This research has implications specifically for criminal justice decision makers in charge of resource allocation for crime prevention. More generally, this research has implications for decision makers concerned with geographic spaces occupied by Twitter-using individuals.",
"title": ""
},
{
"docid": "e5206df50c9a1477928df7a21e054489",
"text": "Reasoning about the relationships between object pairs in images is a crucial task for holistic scene understanding. Most of the existing works treat this task as a pure visual classification task: each type of relationship or phrase is classified as a relation category based on the extracted visual features. However, each kind of relationships has a wide variety of object combination and each pair of objects has diverse interactions. Obtaining sufficient training samples for all possible relationship categories is difficult and expensive. In this work, we propose a natural language guided framework to tackle this problem. We propose to use a generic bi-directional recurrent neural network to predict the semantic connection between the participating objects in the relationship from the aspect of natural language. The proposed simple method achieves the state-of-the-art on the Visual Relationship Detection (VRD) and Visual Genome datasets, especially when predicting unseen relationships (e.g., recall improved from 76.42% to 89.79% on VRD zeroshot testing set).",
"title": ""
},
{
"docid": "6625c2f456bb09c4e4668b7326247e02",
"text": "The More-Electric Aircraft (MEA) underlines the utilization of the electrical power to power the non-propulsive aircraft systems. Adopting the MEA achieves numerous advantages such as optimizing the aircraft performance and decreasing operating and maintenance costs. Moreover, the MEA reduces the emission of the air pollutant gases from the aircraft, which can contribute in solving the problem of climate change. However, the MEA put some challenge on the aircraft electrical system either in the amount of the required power or the processing and management of this power. This paper introduces a review for the MEA. The review includes the different options of generation and power system architectures.",
"title": ""
},
{
"docid": "a361e1ee0b840296e840057d25bbb906",
"text": "In recent decades, we have witnessed the evolution of biometric technology from the first pioneering works in face and voice recognition to the current state of development wherein a wide spectrum of highly accurate systems may be found, ranging from largely deployed modalities, such as fingerprint, face, or iris, to more marginal ones, such as signature or hand. This path of technological evolution has naturally led to a critical issue that has only started to be addressed recently: the resistance of this rapidly emerging technology to external attacks and, in particular, to spoofing. Spoofing, referred to by the term presentation attack in current standards, is a purely biometric vulnerability that is not shared with other IT security solutions. It refers to the ability to fool a biometric system into recognizing an illegitimate user as a genuine one by means of presenting a synthetic forged version of the original biometric trait to the sensor. The entire biometric community, including researchers, developers, standardizing bodies, and vendors, has thrown itself into the challenging task of proposing and developing efficient protection methods against this threat. The goal of this paper is to provide a comprehensive overview on the work that has been carried out over the last decade in the emerging field of antispoofing, with special attention to the mature and largely deployed face modality. The work covers theories, methodologies, state-of-the-art techniques, and evaluation databases and also aims at providing an outlook into the future of this very active field of research.",
"title": ""
},
{
"docid": "e4f186b25468c70c6e2e2841f8a7a97e",
"text": "A novel wearable haptic device for modulating contact forces at the fingertip is presented. Rendering of forces by skin deformation in three degrees of freedom (DoF), with contact—no contact capabilities, was implemented through rigid parallel kinematics. The novel asymmetrical three revolute-spherical-revolute (3-RSR) configuration allowed compact dimensions with minimum encumbrance of the hand workspace. The device was designed to render constant to low frequency deformation of the fingerpad in three DoF, combining light weight with relatively high output forces. A differential method for solving the non-trivial inverse kinematics is proposed and implemented in real time for controlling the device. The first experimental activity evaluated discrimination of different fingerpad stretch directions in a group of five subjects. The second experiment, enrolling 19 subjects, evaluated cutaneous feedback provided in a virtual pick-and-place manipulation task. Stiffness of the fingerpad plus device was measured and used to calibrate the physics of the virtual environment. The third experiment with 10 subjects evaluated interaction forces in a virtual lift-and-hold task. Although with different performance in the two manipulation experiments, overall results show that participants better controlled interaction forces when the cutaneous feedback was active, with significant differences between the visual and visuo-haptic experimental conditions.",
"title": ""
},
{
"docid": "9cc3023d7323b2c83f49a02e22f1c9ba",
"text": "Unmanned Aircraft Systems (UAS) are being used commonly for video surveillance, providing valuable video data and reducing the risks associated with human operators. Thanks to its benefits, the UAS traffic is nearly doubling every year. However, the risks associated with the UAS are also growing. According to the FAA, the volume of air traffic will grow steadily, doubling in the next 20 years. Paired with the exponential growth of the UAS traffic, the risk of collision is also growing as well as privacy concerns. An effective UAS detection and/or tracking method is critically needed for air traffic safety. This research is aimed at developing a system that can identify/detect a UAS, which will subsequently enable counter measures against UAS. The proposed system will identify a UAS through various methods including image processing and mechanical tracking. Once a UAS is detected, a countermeasure can be employed along with the tracking system. In this research, we describe the design, algorithms, and implementation details of the system as well as some performance aspects. The proposed system will help keep the malicious or harmful UAS away from the restricted or residential areas.",
"title": ""
},
{
"docid": "f82eb2d4cc45577f08c7e867bf012816",
"text": "OBJECTIVE\nThe purpose of this study was to compare the retrieval characteristics of the Option Elite (Argon Medical, Plano, Tex) and Denali (Bard, Tempe, Ariz) retrievable inferior vena cava filters (IVCFs), two filters that share a similar conical design.\n\n\nMETHODS\nA single-center, retrospective study reviewed all Option and Denali IVCF removals during a 36-month period. Attempted retrievals were classified as advanced if the routine \"snare and sheath\" technique was initially unsuccessful despite multiple attempts or an alternative endovascular maneuver or access site was used. Patient and filter characteristics were documented.\n\n\nRESULTS\nIn our study, 63 Option and 45 Denali IVCFs were retrieved, with an average dwell time of 128.73 and 99.3 days, respectively. Significantly higher median fluoroscopy times were experienced in retrieving the Option filter compared with the Denali filter (12.18 vs 6.85 minutes; P = .046). Use of adjunctive techniques was also higher in comparing the Option filter with the Denali filter (19.0% vs 8.7%; P = .079). No significant difference was noted between these groups in regard to gender, age, or history of malignant disease.\n\n\nCONCLUSIONS\nOption IVCF retrieval procedures required significantly longer retrieval fluoroscopy time compared with Denali IVCFs. Although procedure time was not analyzed in this study, as a surrogate, the increased fluoroscopy time may also have an impact on procedural direct costs and throughput.",
"title": ""
},
{
"docid": "8b701007a5c7ffd70ced2f244a2b6ee9",
"text": "In-depth interviews and focus group discussions were conducted to inform the development of an instrument to measure the health-related quality of life of children living with HIV. The QOL-CHAI instrument consists of four generic core scales of the \"Pediatric Quality of Life Inventory\" and two HIV-targeted scales-\"symptoms\" and \"discrimination.\" A piloting exercise involving groups of children living with HIV and HIV-negative children born to HIV-infected parents provided evidence for the acceptable psychometric properties and usability of the instrument. It is expected that the QOL-CHAI can serve well as a brief, standardized, and culturally appropriate instrument for assessing health-related quality of life of Indian children living with HIV.",
"title": ""
},
{
"docid": "ef74392a9681d16b14970740cbf85191",
"text": "We propose an efficient physics-based method for dexterous ‘real hand’ - ‘virtual object’ interaction in Virtual Reality environments. Our method is based on the Coulomb friction model, and we show how to efficiently implement it in a commodity VR engine for realtime performance. This model enables very convincing simulations of many types of actions such as pushing, pulling, grasping, or even dexterous manipulations such as spinning objects between fingers without restrictions on the objects' shapes or hand poses. Because it is an analytic model, we do not require any prerecorded data, in contrast to previous methods. For the evaluation of our method, we conduction a pilot study that shows that our method is perceived more realistic and natural, and allows for more diverse interactions. Further, we evaluate the computational complexity of our method to show real-time performance in VR environments.",
"title": ""
},
{
"docid": "da1551db4343ca63f4b910191c5b91a1",
"text": "Current two-stage object detectors, which consists of a region proposal stage and a refinement stage, may produce unreliable results due to ill-localized proposed regions. To address this problem, we propose a context refinement algorithm that explores rich contextual information to better refine each proposed region. In particular, we first identify neighboring regions that may contain useful contexts and then perform refinement based on the extracted and unified contextual information. In practice, our method effectively improves the quality of the final detection results as well as region proposals. Empirical studies show that context refinement yields substantial and consistent improvements over different baseline detectors. Moreover, the proposed algorithm brings around 3% performance gain on PASCAL VOC benchmark and around 6% gain on MS COCO benchmark respectively.",
"title": ""
},
{
"docid": "09538bc92c8bf9818bf84e44024f087c",
"text": "An up-to-date review paper on automotive sensors is presented. Attention is focused on sensors used in production automotive systems. The primary sensor technologies in use today are reviewed and are classified according to their three major areas ofautomotive systems application–powertrain, chassis, and body. This subject is extensive. As described in this paper, for use in automotive systems, there are six types of rotational motion sensors, four types of pressure sensors, five types of position sensors, and three types of temperature sensors. Additionally, two types of mass air flow sensors, five types of exhaust gas oxygen sensors, one type of engine knock sensor, four types of linear acceleration sensors, four types of angular-rate sensors, four types of occupant comfort/convenience sensors, two types of near-distance obstacle detection sensors, four types of far-distance obstacle detection sensors, and and ten types of emerging, state-of the-art, sensors technologies are identified.",
"title": ""
},
{
"docid": "1eab78b995fadb69692b254f41a5028e",
"text": "Raindrops on vehicles' windshields can degrade the performance of in-vehicle vision systems. In this paper, we present a novel approach that detects and removes raindrops in the captured image when using a single in-vehicle camera. When driving in light or moderate rainy conditions, raindrops appear as small circlets on the windshield in each image frame. Therefore, by analyzing the color, texture and shape characteristics of raindrops in images, we first identify possible raindrop candidates in the regions of interest (ROI), which are small locally salient droplets in a raindrop saliency map. Then, a learning-based verification algorithm is proposed to reduce the number of false alarms (i.e., clear regions mis-detected as raindrops). Finally, we fill in the regions occupied by the raindrops using image inpainting techniques. Numerical experiments indicate that the proposed method is capable of detecting and reducing raindrops in various rain and road scenarios. We also quantify the improvement offered by the proposed method over the state-of-the-art algorithms aimed at the same problem and the benefits to the in-vehicle vision applications like clear path detection.",
"title": ""
},
{
"docid": "2e5e5b5342963d89b6710a0145f97b43",
"text": "Spaced repetition learning is an approach for choosing the most efficient intervals between rehearsing learning content. Typically used for tasks like learning vocabulary it also offers great potential for content selection in learning games. Learning games do, however differ from classic spaced repetition learning approaches in that content is not only accessed when indicated by a spaced repetition scheduling algorithm but also when the users simply want to play the game or when they decide to play the game multiple times in a row. In these cases, short term memory effects might mask learning effects in user performance, leading to faulty inputs to the calculation of spaced repetition interval lengths. This paper reviews current research literature on the interaction of short term and long term memory in order to determine how short term memory effects can be coped with in the context of spaced repetition based learning games.",
"title": ""
},
{
"docid": "2bed91cd91b2958eb46af613a8cb4978",
"text": "Millions of HTML tables containing structured data can be found on the Web. With their wide coverage, these tables are potentially very useful for filling missing values and extending cross-domain knowledge bases such as DBpedia, YAGO, or the Google Knowledge Graph. As a prerequisite for being able to use table data for knowledge base extension, the HTML tables need to be matched with the knowledge base, meaning that correspondences between table rows/columns and entities/schema elements of the knowledge base need to be found. This paper presents the T2D gold standard for measuring and comparing the performance of HTML table to knowledge base matching systems. T2D consists of 8 700 schema-level and 26 100 entity-level correspondences between the WebDataCommons Web Tables Corpus and the DBpedia knowledge base. In contrast related work on HTML table to knowledge base matching, the Web Tables Corpus (147 million tables), the knowledge base, as well as the gold standard are publicly available. The gold standard is used afterward to evaluate the performance of T2K Match, an iterative matching method which combines schema and instance matching. T2K Match is designed for the use case of matching large quantities of mostly small and narrow HTML tables against large cross-domain knowledge bases. The evaluation using the T2D gold standard shows that T2K Match discovers table-to-class correspondences with a precision of 94%, row-to-entity correspondences with a precision of 90%, and column-to-property correspondences with a precision of 77%.",
"title": ""
},
{
"docid": "7974d3e3e9c431256ee35c3032288bd1",
"text": "Nowadays, the usage of mobile device among the community worldwide has been tremendously increased. With this proliferation of mobile devices, more users are able to access the internet for variety of online application and services. As the use of mobile devices and applications grows, the rate of vulnerabilities exploitation and sophistication of attack towards the mobile user are increasing as well. To date, Google's Android Operating System (OS) are among the widely used OS for the mobile devices, the openness design and ease of use have made them popular among developer and user. Despite the advantages the android-based mobile devices have, it also invited the malware author to exploit the mobile application on the market. Prior to this matter, this research focused on investigating the behaviour of mobile malware through hybrid approach. The hybrid approach correlates and reconstructs the result from the static and dynamic malware analysis in producing a trace of malicious event. Based on the finding, this research proposed a general mobile malware behaviour model that can contribute in identifying the key features in detecting mobile malware on an Android Platform device.",
"title": ""
},
{
"docid": "b78f935622b143bbbcaff580ba42e35d",
"text": "A churn is defined as the loss of a user in an online social network (OSN). Detecting and analyzing user churn at an early stage helps to provide timely delivery of retention solutions (e.g., interventions, customized services, and better user interfaces) that are useful for preventing users from churning. In this paper we develop a prediction model based on a clustering scheme to analyze the potential churn of users. In the experiment, we test our approach on a real-name OSN which contains data from 77,448 users. A set of 24 attributes is extracted from the data. A decision tree classifier is used to predict churn and non-churn users of the future month. In addition, k-means algorithm is employed to cluster the actual churn users into different groups with different online social networking behaviors. Results show that the churn and nonchurn prediction accuracies of ∼65% and ∼77% are achieved respectively. Furthermore, the actual churn users are grouped into five clusters with distinguished OSN activities and some suggestions of retaining these users are provided.",
"title": ""
},
{
"docid": "d2eacfccb44c7bd80def65b639643a74",
"text": "Many mobile applications running on smartphones and wearable devices would potentially benefit from the accuracy and scalability of deep CNN-based machine learning algorithms. However, performance and energy consumption limitations make the execution of such computationally intensive algorithms on mobile devices prohibitive. We present a GPU-accelerated library, dubbed CNNdroid [1], for execution of trained deep CNNs on Android-based mobile devices. Empirical evaluations show that CNNdroid achieves up to 60X speedup and 130X energy saving on current mobile devices. The CNNdroid open source library is available for download at https://github.com/ENCP/CNNdroid",
"title": ""
},
{
"docid": "63e9d1682131a1b99bef82d4795166d2",
"text": "In this paper, we present a new method for bidirectional relighting for 3D-aided 2D face recognition under large pose and illumination changes. During subject enrollment, we build subject-specific 3D annotated models by using the subjects' raw 3D data and 2D texture. During authentication, the probe 2D images are projected onto a normalized image space using the subject-specific 3D model in the gallery. Then, a bidirectional relighting algorithm and two similarity metrics (a view-dependent complex wavelet structural similarity and a global similarity) are employed to compare the gallery and probe. We tested our algorithms on the UHDB11 and UHDB12 databases that contain 3D data with probe images under large lighting and pose variations. The experimental results show the robustness of our approach in recognizing faces in difficult situations.",
"title": ""
},
{
"docid": "c7ce8c36fbef34a4554d60f86d3335d4",
"text": "This study is aimed to determine the effect of the employee’s personality and organizational culture toward the employee’s performance through the BPR OCB throughout the Gianyar district of Bali province. This study used a quantitative approach to test the hypotheses by the sampling technique proportional simple random sampling of the 105 respondents who are employees not the leader of BPR in Gianyar Bali, the data collecting used in this study is a questionnaire. The data analysis technique used SEM analysis. The results showed that the employee’s personality and organizational culture have an indirect effect on employee performance through OCB of all of BPR in Gianyar Bali.",
"title": ""
},
{
"docid": "f5b02bdd74772ff2454a475e44077c8e",
"text": "This paper presents a new method - adversarial advantage actor-critic (Adversarial A2C), which significantly improves the efficiency of dialogue policy learning in task-completion dialogue systems. Inspired by generative adversarial networks (GAN), we train a discriminator to differentiate responses/actions generated by dialogue agents from responses/actions by experts. Then, we incorporate the discriminator as another critic into the advantage actor-critic (A2C) framework, to encourage the dialogue agent to explore state-action within the regions where the agent takes actions similar to those of the experts. Experimental results in a movie-ticket booking domain show that the proposed Adversarial A2C can accelerate policy exploration efficiently.",
"title": ""
}
] |
scidocsrr
|
db1c2c7164d23024b049857da3a711fb
|
Multilingual Language Processing From Bytes
|
[
{
"docid": "519241b84a8a18cae31a35a291d3bce1",
"text": "Recent work in neural machine translation has shown promising performance, but the most effective architectures do not scale naturally to large vocabulary sizes. We propose and compare three variable-length encoding schemes that represent a large vocabulary corpus using a much smaller vocabulary with no loss in information. Common words are unaffected by our encoding, but rare words are encoded using a sequence of two pseudo-words. Our method is simple and effective: it requires no complete dictionaries, learning procedures, increased training time, changes to the model, or new parameters. Compared to a baseline that replaces all rare words with an unknown word symbol, our best variable-length encoding strategy improves WMT English-French translation performance by up to 1.7 BLEU.",
"title": ""
}
] |
[
{
"docid": "ef66627d34d684e41bc7541b18dfd687",
"text": "This work proposes a simple instance retrieval pipeline based on encoding the convolutional features of CNN using the bag of words aggregation scheme (BoW). Assigning each local array of activations in a convolutional layer to a visual word produces an assignment map, a compact representation that relates regions of an image with a visual word. We use the assignment map for fast spatial reranking, obtaining object localizations that are used for query expansion. We demonstrate the suitability of the BoW representation based on local CNN features for instance retrieval, achieving competitive performance on the Oxford and Paris buildings benchmarks. We show that our proposed system for CNN feature aggregation with BoW outperforms state-of-the-art techniques using sum pooling at a subset of the challenging TRECVid INS benchmark.",
"title": ""
},
{
"docid": "d88ce9c09fdfa0c1ea023ce08183f39b",
"text": "The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation.\n This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values. We give an overview and classification of different ways of fusing data and present several techniques based on standard and advanced operators of the relational algebra and SQL. Finally, the article features a comprehensive survey of data integration systems from academia and industry, showing if and how data fusion is performed in each.",
"title": ""
},
{
"docid": "947bb564a2a4207d33ca545d8194add4",
"text": "Classical theories of the firm assume access to reliable signals to measure the causal impact of choice variables on profit. For advertising expenditure we show, using twenty-five online field experiments (representing $2.8 million) with major U.S. retailers and brokerages, that this assumption typically does not hold. Statistical evidence from the randomized trials is very weak because individual-level sales are incredibly volatile relative to the per capita cost of a campaign—a “small” impact on a noisy dependent variable can generate positive returns. A concise statistical argument shows that the required sample size for an experiment to generate sufficiently informative confidence intervals is typically in excess of ten million person-weeks. This also implies that heterogeneity bias (or model misspecification) unaccounted for by observational methods only needs to explain a tiny fraction of the variation in sales to severely bias estimates. The weak informational feedback means most firms cannot even approach profit maximization.",
"title": ""
},
{
"docid": "e405daebeecf41f4b6deadc0f321415f",
"text": "One challenge concerning the reliability of ball grid array (BGA) packages assembly is to detect the void defects occurring inside solder balls. Additionally, for use in mass production, an automated inspection system has increasingly become an attractive solution. In practice, the first procedure of this system is required to segment the individual solder balls from the background. Here, we have proposed the efficient method to analyze the X-ray images obtained from arbitrary rotating angles of a printed circuit board (PCB). Specifically, we have succeeded in dealing with the general pattern of solder balls arrangement by applying the Delaunay triangulation technique and verifying the unit cell of periodic lattice. The proposed method is robust against occluded balls caused by some interference and outperforms the conventional method in term of yielding higher accuracy of solder ball segmentation and lower processing time. These conclusions were confirmed by our experiments.",
"title": ""
},
{
"docid": "af8ddd6792a98ea3b59bdaab7c7fa045",
"text": "This research explores the alternative media ecosystem through a Twitter lens. Over a ten-month period, we collected tweets related to alternative narratives—e.g. conspiracy theories—of mass shooting events. We utilized tweeted URLs to generate a domain network, connecting domains shared by the same user, then conducted qualitative analysis to understand the nature of different domains and how they connect to each other. Our findings demonstrate how alternative news sites propagate and shape alternative narratives, while mainstream media deny them. We explain how political leanings of alternative news sites do not align well with a U.S. left-right spectrum, but instead feature an antiglobalist (vs. globalist) orientation where U.S. Alt-Right sites look similar to U.S. Alt-Left sites. Our findings describe a subsection of the emerging alternative media ecosystem and provide insight in how websites that promote conspiracy theories and pseudo-science may function to conduct underlying political agendas.",
"title": ""
},
{
"docid": "dcf24411ffed0d5bf2709e005f6db753",
"text": "Dynamic Causal Modelling (DCM) is an approach first introduced for the analysis of functional magnetic resonance imaging (fMRI) to quantify effective connectivity between brain areas. Recently, this framework has been extended and established in the magneto/encephalography (M/EEG) domain. DCM for M/EEG entails the inversion a full spatiotemporal model of evoked responses, over multiple conditions. This model rests on a biophysical and neurobiological generative model for electrophysiological data. A generative model is a prescription of how data are generated. The inversion of a DCM provides conditional densities on the model parameters and, indeed on the model itself. These densities enable one to answer key questions about the underlying system. A DCM comprises two parts; one part describes the dynamics within and among neuronal sources, and the second describes how source dynamics generate data in the sensors, using the lead-field. The parameters of this spatiotemporal model are estimated using a single (iterative) Bayesian procedure. In this paper, we will motivate and describe the current DCM framework. Two examples show how the approach can be applied to M/EEG experiments.",
"title": ""
},
{
"docid": "d1bd01a4760f08ebe3557557327108b4",
"text": "This paper investigates the performance of our recently proposed precoding multiuser (MU) MIMO system in indoor visible light communications (VLC). The transmitted data of decentralized users are transmitted by light-emitting-diode (LED) arrays after precoding in a transmitter, by which the MU interference is eliminated. Thus, the complexity of user terminals could be reduced, which results in the reduction of power consumption. The limitation of block diagonalization precoding algorithm in VLC systems is investigated. The corresponding solution by utilizing optical detectors with different fields of view (FOV) is derived, and the impact of FOV to the proposed system is also analyzed. In this paper, we focus on BER and signal-to-noise-ratio performances of the proposed system with the consideration of the mobility of user terminals. Simulation results show that the majority of the indoor region can achieve 100 Mb/s at a BER of 10-6 when single LED chip's power is larger than 10 mW.",
"title": ""
},
{
"docid": "3e749b561a67f2cc608f40b15c71098d",
"text": "As it emerged from philosophical analyses and cognitive research, most concepts exhibit typicality effects, and resist to the efforts of defining them in terms of necessary and sufficient conditions. This holds also in the case of many medical concepts. This is a problem for the design of computer science ontologies, since knowledge representation formalisms commonly adopted in this field (such as, in the first place, the Web Ontology Language OWL) do not allow for the representation of concepts in terms of typical traits. The need of representing concepts in terms of typical traits concerns almost every domain of real world knowledge, including medical domains. In particular, in this article we take into account the domain of mental disorders, starting from the DSM-5 descriptions of some specific disorders. We favour a hybrid approach to concept representation, in which ontology oriented formalisms are combined to a geometric representation of knowledge based on conceptual space. As a preliminary step to apply our proposal to mental disorder concepts, we started to develop an OWL ontology of the schizophrenia spectrum, which is as close as possible to the DSM-5 descriptions.",
"title": ""
},
{
"docid": "79833f074b2e06d5c56898ca3f008c00",
"text": "Regular expressions have served as the dominant workhorse of practical information extraction for several years. However, there has been little work on reducing the manual effort involved in building high-quality, complex regular expressions for information extraction tasks. In this paper, we propose ReLIE, a novel transformation-based algorithm for learning such complex regular expressions. We evaluate the performance of our algorithm on multiple datasets and compare it against the CRF algorithm. We show that ReLIE, in addition to being an order of magnitude faster, outperforms CRF under conditions of limited training data and cross-domain data. Finally, we show how the accuracy of CRF can be improved by using features extracted by ReLIE.",
"title": ""
},
{
"docid": "f2c6a7f205f1aa6b550418cd7e93f7d2",
"text": "This paper addresses the problem of a single rumor source detection with multiple observations, from a statistical point of view of a spreading over a network, based on the susceptible-infectious model. For tree networks, multiple sequential observations for one single instance of rumor spreading cannot improve over the initial snapshot observation. The situation dramatically improves for multiple independent observations. We propose a unified inference framework based on the union rumor centrality, and provide explicit detection performance for degree-regular tree networks. Surprisingly, even with merely two observations, the detection probability at least doubles that of a single observation, and further approaches one, i.e., reliable detection, with increasing degree. This indicates that a richer diversity enhances detectability. For general graphs, a detection algorithm using a breadth-first search strategy is also proposed and evaluated. Besides rumor source detection, our results can be used in network forensics to combat recurring epidemic-like information spreading such as online anomaly and fraudulent email spams.",
"title": ""
},
{
"docid": "ee045772d55000b6f2d3f7469a4161b1",
"text": "Although prior research has addressed the influence of corporate social responsibility (CSR) on perceived customer responses, it is not clear whether CSR affects market value of the firm. This study develops and tests a conceptual framework, which predicts that (1) customer satisfaction partially mediates the relationship between CSR and firm market value (i.e., Tobin’s q and stock return), (2) corporate abilities (innovativeness capability and product quality) moderate the financial returns to CSR, and (3) these moderated relationships are mediated by customer satisfaction. Based on a large-scale secondary dataset, the results show support for this framework. Interestingly, it is found that in firms with low innovativeness capability, CSR actually reduces customer satisfaction levels and, through the lowered satisfaction, harms market value. The uncovered mediated and asymmetrically moderated results offer important implications for marketing theory and practice. In today’s competitive market environment, corporate social responsibility (CSR) represents a high-profile notion that has strategic importance to many companies. As many as 90% of the Fortune 500 companies now have explicit CSR initiatives (Kotler and Lee 2004; Lichtenstein et al. 2004). According to a recent special report by BusinessWeek (2005a, p.72), large companies disclosed substantial investments in CSR initiatives (i.e., Target’s donation of $107.8 million in CSR represents 3.6% of its pretax profits, with GM $51.2 million at 2.7%, General Mills $60.3 million at 3.2%, Merck $921million at 11.3%, HCA $926 million at 43.3%). By dedicating everincreasing amounts to cash donations, in-kind contributions, cause marketing, and employee volunteerism programs, companies are acting on the premise that CSR is not merely the “right thing to do,” but also “the smart thing to do” (Smith 2003). Importantly, along with increasing media coverage of CSR issues, companies themselves are also taking direct and visible steps to communicate their CSR initiatives to various stakeholders including consumers. A decade ago, Drumwright (1996) observed that advertising with a social dimension was on the rise. The trend seems to continue. Many companies, including the likes of Target and Walmart, have funded large national ad campaigns promoting their good works. The October 2005 issue of In Style magazine alone carried more than 25 “cause” ads. Indeed, consumers seem to be taking notice: whereas in 1993 only 26% of individuals surveyed by Cone Communications could name a company as a strong corporate citizen, by 2004, the percentage surged to as high as 80% (BusinessWeek 2005a). Motivated, in part, by this mounting importance of CSR in practice, several marketing studies have found that social responsibility programs have a significant influence on a number of customer-related outcomes (Bhattacharya and Sen 2004). More specifically, based on lab experiments, CSR is reported to directly or indirectly impact consumer product responses",
"title": ""
},
{
"docid": "db529d673e5d1b337e98e05eeb35fcf4",
"text": "Human group activities detection in multi-camera CCTV surveillance videos is a pressing demand on smart surveillance. Previous works on this topic are mainly based on camera topology inference that is hard to apply to real-world unconstrained surveillance videos. In this paper, we propose a new approach for multi-camera group activities detection. Our approach simultaneously exploits intra-camera and inter-camera contexts without topology inference. Specifically, a discriminative graphical model with hidden variables is developed. The intra-camera and inter-camera contexts are characterized by the structure of hidden variables. By automatically optimizing the structure, the contexts are effectively explored. Furthermore, we propose a new spatiotemporal feature, named vigilant area (VA), to characterize the quantity and appearance of the motion in an area. This feature is effective for group activity representation and is easy to extract from a dynamic and crowded scene. We evaluate the proposed VA feature and discriminative graphical model extensively on two real-world multi-camera surveillance video data sets, including a public corpus consisting of 2.5 h of videos and a 468-h video collection, which, to the best of our knowledge, is the largest video collection ever used in human activity detection. The experimental results demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "326def5d55a8f45f9f1d85fd606588a9",
"text": "Visualization and situational awareness are of vital importance for power systems, as the earlier a power-system event such as a transmission line fault or cyber-attack is identified, the quicker operators can react to avoid unnecessary loss. Accurate time-synchronized data, such as system measurements and device status, provide benefits for system state monitoring. However, the time-domain analysis of such heterogeneous data to extract patterns is difficult due to the existence of transient phenomena in the analyzed measurement waveforms. This paper proposes a sequential pattern mining approach to accurately extract patterns of power-system disturbances and cyber-attacks from heterogeneous time-synchronized data, including synchrophasor measurements, relay logs, and network event monitor logs. The term common path is introduced. A common path is a sequence of critical system states in temporal order that represent individual types of disturbances and cyber-attacks. Common paths are unique signatures for each observed event type. They can be compared to observed system states for classification. In this paper, the process of automatically discovering common paths from labeled data logs is introduced. An included case study uses the common path-mining algorithm to learn common paths from a fusion of heterogeneous synchrophasor data and system logs for three types of disturbances (in terms of faults) and three types of cyber-attacks, which are similar to or mimic faults. The case study demonstrates the algorithm's effectiveness at identifying unique paths for each type of event and the accompanying classifier's ability to accurately discern each type of event.",
"title": ""
},
{
"docid": "bf5280b0c76ffe4b02976df1d2c1ec93",
"text": "5G Technology stands for Fifth Generation Mobile technology. From generation 1G to 2.5G and from 3G to 5G this world of telecommunication has seen a number of improvements along with improved performance with every passing day. Fifth generation network provide affordable broadband wireless connectivity (very high speed). The paper throws light on network architecture of fifth generation technology. Currently 5G term is not officially used. In fifth generation researches are being made on development of World Wide Wireless Web (WWWW), Dynamic Adhoc Wireless Networks (DAWN) and Real Wireless World. Fifth generation focus on (Voice over IP) VOIP-enabled devices that user will experience a high level of call volume and data transmission. Wire-less system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore has started research on fifth generation wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. The proposed network is enforced by nanotechnology, cloud computing and based on all IP Platform. The main features in 5G mobile network is that user can simultaneously connect to the multiple wireless technologies and can switch between them. This forthcoming mobile technology will support IPv6 and flat IP.",
"title": ""
},
{
"docid": "ecd486fabd206ad8c28ea9d9da8cd0ee",
"text": "The prevailing binding of SOAP to HTTP specifies that SOAP messages be encoded as an XML 1.0 document which is then sent between client and server. XML processing however can be slow and memory intensive, especially for scientific data, and consequently SOAP has been regarded as an inappropriate protocol for scientific data. Efficiency considerations thus lead to the prevailing practice of separating data from the SOAP control channel. Instead, it is stored in specialized binary formats and transmitted either via attachments or indirectly via a file sharing mechanism, such as GridFTP or HTTP. This separation invariably complicates development due to the multiple libraries and type systems to be handled; furthermore it suffers from performance issues, especially when handling small binary data. As an alternative solution, binary XML provides a highly efficient encoding scheme for binary data in the XML and SOAP messages, and with it we can gain high performance as well as unifying the development environment without unduly impacting the Web service protocol stack. In this paper we present our implementation of a generic SOAP engine that supports both textual XML and binary XML as the encoding scheme of the message. We also present our binary XML data model and encoding scheme. Our experiments show that for scientific applications binary XML together with the generic SOAP implementation not only ease development, but also provide better performance and are more widely applicable than the commonly used separated schemes",
"title": ""
},
{
"docid": "0e61015f3372ba177acdfcddbd0ffdfb",
"text": "INTRODUCTION\nThere are many challenges to the drug discovery process, including the complexity of the target, its interactions, and how these factors play a role in causing the disease. Traditionally, biophysics has been used for hit validation and chemical lead optimization. With its increased throughput and sensitivity, biophysics is now being applied earlier in this process to empower target characterization and hit finding. Areas covered: In this article, the authors provide an overview of how biophysics can be utilized to assess the quality of the reagents used in screening assays, to validate potential tool compounds, to test the integrity of screening assays, and to create follow-up strategies for compound characterization. They also briefly discuss the utilization of different biophysical methods in hit validation to help avoid the resource consuming pitfalls caused by the lack of hit overlap between biophysical methods. Expert opinion: The use of biophysics early on in the drug discovery process has proven crucial to identifying and characterizing targets of complex nature. It also has enabled the identification and classification of small molecules which interact in an allosteric or covalent manner with the target. By applying biophysics in this manner and at the early stages of this process, the chances of finding chemical leads with novel mechanisms of action are increased. In the future, focused screens with biophysics as a primary readout will become increasingly common.",
"title": ""
},
{
"docid": "90fdac33a73d1615db1af0c94016da5b",
"text": "AIM OF THE STUDY\nThe purpose of this study was to define antidiabetic effects of fruit of Vaccinium arctostaphylos L. (Ericaceae) which is traditionally used in Iran for improving of health status of diabetic patients.\n\n\nMATERIALS AND METHODS\nFirstly, we examined the effect of ethanolic extract of Vaccinium arctostaphylos fruit on postprandial blood glucose (PBG) after 1, 3, 5, 8, and 24h following a single dose administration of the extract to alloxan-diabetic male Wistar rats. Also oral glucose tolerance test was carried out. Secondly, PBG was measured at the end of 1, 2 and 3 weeks following 3 weeks daily administration of the extract. At the end of treatment period the pancreatic INS and cardiac GLUT-4 mRNA expression and also the changes in the plasma lipid profiles and antioxidant enzymes activities were assessed. Finally, we examined the inhibitory activity of the extract against rat intestinal α-glucosidase.\n\n\nRESULTS\nThe obtained results showed mild acute (18%) and also significant chronic (35%) decrease in the PBG, significant reduction in triglyceride (47%) and notable rising of the erythrocyte superoxide dismutase (57%), glutathione peroxidase (35%) and catalase (19%) activities due to treatment with the extract. Also we observed increased expression of GLUT-4 and INS genes in plant extract treated Wistar rats. Furthermore, in vitro studies displayed 47% and 56% inhibitory effects of the extract on activity of intestinal maltase and sucrase enzymes, respectively.\n\n\nCONCLUSIONS\nFindings of this study allow us to establish scientifically Vaccinium arctostaphylos fruit as a potent antidiabetic agent with antihyperglycemic, antioxidant and triglyceride lowering effects.",
"title": ""
},
{
"docid": "aaec79a58537f180aba451ea825ed013",
"text": "In my March 2006 CACM article I used the term \" computational thinking \" to articulate a vision that everyone, not just those who major in computer science, can benefit from thinking like a computer scientist [Wing06]. So, what is computational thinking? Here is a definition that Jan use; it is inspired by an email exchange I had with Al Aho of Columbia University: Computational Thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent [CunySnyderWing10] Informally, computational thinking describes the mental activity in formulating a problem to admit a computational solution. The solution can be carried out by a human or machine, or more generally, by combinations of humans and machines. When I use the term computational thinking, my interpretation of the words \" problem \" and \" solution \" is broad; in particular, I mean not just mathematically well-defined problems whose solutions are completely analyzable, e.g., a proof, an algorithm, or a program, but also real-world problems whose solutions might be in the form of large, complex software systems. Thus, computational thinking overlaps with logical thinking and systems thinking. It includes algorithmic thinking and parallel thinking, which in turn engage other kinds of thought processes, e.g., compositional reasoning, pattern matching, procedural thinking, and recursive thinking. Computational thinking is used in the design and analysis of problems and their solutions, broadly interpreted. The most important and high-level thought process in computational thinking is the abstraction process. Abstraction is used in defining patterns, generalizing from instances, and parameterization. It is used to let one object stand for many. It is used to capture essential properties common to a set of objects while hiding irrelevant distinctions among them. For example, an algorithm is an abstraction of a process that takes inputs, executes a sequence of steps, and produces outputs to satisfy a desired goal. An abstract data type defines an abstract set of values and operations for manipulating those values, hiding the actual representation of the values from the user of the abstract data type. Designing efficient algorithms inherently involves designing abstract data types. Abstraction gives us the power to scale and deal with complexity. Recursively applying abstraction gives us the ability to build larger and larger systems, with the base case (at least for computer science) being bits (0's …",
"title": ""
},
{
"docid": "36db2c06d65576e03e00017a9060fd24",
"text": "Real-world relations among entities can oen be observed and determined by different perspectives/views. For example, the decision made by a user on whether to adopt an item relies on multiple aspects such as the contextual information of the decision, the item’s aributes, the user’s profile and the reviews given by other users. Different views may exhibit multi-way interactions among entities and provide complementary information. In this paper, we introduce a multi-tensor-based approach that can preserve the underlying structure of multi-view data in a generic predictive model. Specifically, we propose structural factorization machines (SFMs) that learn the common latent spaces shared by multi-view tensors and automatically adjust the importance of each view in the predictive model. Furthermore, the complexity of SFMs is linear in the number of parameters, which make SFMs suitable to large-scale problems. Extensive experiments on real-world datasets demonstrate that the proposed SFMs outperform several state-of-the-art methods in terms of prediction accuracy and computational cost. CCS CONCEPTS •Computingmethodologies→Machine learning; Supervised learning; Factorization methods;",
"title": ""
},
{
"docid": "ef8a61d3ff3aad461c57fe893e0b5bb6",
"text": "In this paper, we propose an underwater wireless sensor network (UWSN) named SOUNET where sensor nodes form and maintain a tree-topological network for data gathering in a self-organized manner. After network topology discovery via packet flooding, the sensor nodes consistently update their parent node to ensure the best connectivity by referring to the timevarying neighbor tables. Such a persistent and self-adaptive method leads to high network connectivity without any centralized control, even when sensor nodes are added or unexpectedly lost. Furthermore, malfunctions that frequently happen in self-organized networks such as node isolation and closed loop are resolved in a simple way. Simulation results show that SOUNET outperforms other conventional schemes in terms of network connectivity, packet delivery ratio (PDR), and energy consumption throughout the network. In addition, we performed an experiment at the Gyeongcheon Lake in Korea using commercial underwater modems to verify that SOUNET works well in a real environment.",
"title": ""
}
] |
scidocsrr
|
4687cd2a6862f7117cf2f2e4ab39ed9e
|
A storm is Coming: A Modern Probabilistic Model Checker
|
[
{
"docid": "f8d256bf6fea179847bfb4cc8acd986d",
"text": "We present a logic for stating properties such as, “after a request for service there is at least a 98% probability that the service will be carried out within 2 seconds”. The logic extends the temporal logic CTL by Emerson, Clarke and Sistla with time and probabilities. Formulas are interpreted over discrete time Markov chains. We give algorithms for checking that a given Markov chain satisfies a formula in the logic. The algorithms require a polynomial number of arithmetic operations, in size of both the formula and the Markov chain. A simple example is included to illustrate the algorithms.",
"title": ""
},
{
"docid": "ec77f2b919118a8478675b1eaa21606f",
"text": "Given a parametric Markov model, we consider the problem of computing the rational function expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested to first convert the Markov chain into a finite automaton, from which a regular expression is computed. Afterwards, this expression is evaluated to a closed form function representing the reachability probability. This paper investigates how this idea can be turned into an effective procedure. It turns out that the bottleneck lies in the growth of the regular expression relative to the number of states (n Θ(log n)). We therefore proceed differently, by tightly intertwining the regular expression computation with its evaluation. This allows us to arrive at an effective method that avoids this blow up in most practical cases. We give a detailed account of the approach, also extending to parametric models with rewards and with non-determinism. Experimental evidence is provided, illustrating that our implementation provides meaningful insights on non-trivial models.",
"title": ""
}
] |
[
{
"docid": "fb05042ac52f448d9c7d3f820df4b790",
"text": "Protein gamma-turn prediction is useful in protein function studies and experimental design. Several methods for gamma-turn prediction have been developed, but the results were unsatisfactory with Matthew correlation coefficients (MCC) around 0.2–0.4. Hence, it is worthwhile exploring new methods for the prediction. A cutting-edge deep neural network, named Capsule Network (CapsuleNet), provides a new opportunity for gamma-turn prediction. Even when the number of input samples is relatively small, the capsules from CapsuleNet are effective to extract high-level features for classification tasks. Here, we propose a deep inception capsule network for gamma-turn prediction. Its performance on the gamma-turn benchmark GT320 achieved an MCC of 0.45, which significantly outperformed the previous best method with an MCC of 0.38. This is the first gamma-turn prediction method utilizing deep neural networks. Also, to our knowledge, it is the first published bioinformatics application utilizing capsule network, which will provide a useful example for the community. Executable and source code can be download at http://dslsrv8.cs.missouri.edu/~cf797/MUFoldGammaTurn/download.html.",
"title": ""
},
{
"docid": "9fdaddce26965be59f9d46d06fa0296a",
"text": "Using emotion detection technologies from biophysical signals, this study explored how emotion evolves during learning process and how emotion feedback could be used to improve learning experiences. This article also described a cutting-edge pervasive e-Learning platform used in a Shanghai online college and proposed an affective e-Learning model, which combined learners’ emotions with the Shanghai e-Learning platform. The study was guided by Russell’s circumplex model of affect and Kort’s learning spiral model. The results about emotion recognition from physiological signals achieved a best-case accuracy (86.3%) for four types of learning emotions. And results from emotion revolution study showed that engagement and confusion were the most important and frequently occurred emotions in learning, which is consistent with the findings from AutoTutor project. No evidence from this study validated Kort’s learning spiral model. An experimental prototype of the affective e-Learning model was built to help improve students’ learning experience by customizing learning material delivery based on students’ emotional state. Experiments indicated the superiority of emotion aware over non-emotion-aware with a performance increase of 91%.",
"title": ""
},
{
"docid": "f6e8eda4fa898a24f3a7d1116e49f42c",
"text": "This is the eBook of the printed book and may not include any media, website access codes, or print supplements that may come packaged with the bound book. Search Engines: Information Retrieval in Practice is ideal for introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. It is also a valuable tool for search engine and information retrieval professionals. В Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice , is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines.В Coverage of the underlying IR and mathematical models reinforce key concepts. The bookвЂTMs numerous programming exercises make extensive use of Galago, a Java-based open source search engine.",
"title": ""
},
{
"docid": "e68d85f01c929a198c9da9672f23ef45",
"text": "Despite recent emergence of video caption methods, how to generate fine-grained video descriptions (i.e., long and detailed commentary about individual movements of multiple subjects as well as their frequent interactions) is far from being solved, which however has great applications such as automatic sports narrative. To this end, this work makes the following contributions. First, to facilitate this novel research of fine-grained video caption, we collected a novel dataset called Fine-grained Sports Narrative dataset (FSN) that contains 2K sports videos with ground-truth narratives from YouTube.com. Second, we develop a novel performance evaluation metric named Fine-grained Captioning Evaluation (FCE) to cope with this novel task. Considered as an extension of the widely used METEOR, it measures not only the linguistic performance but also whether the action details and their temporal orders are correctly described. Third, we propose a new framework for fine-grained sports narrative task. This network features three branches: 1) a spatio-temporal entity localization and role discovering sub-network; 2) a fine-grained action modeling sub-network for local skeleton motion description; and 3) a group relationship modeling sub-network to model interactions between players. We further fuse the features and decode them into long narratives by a hierarchically recurrent structure. Extensive experiments on the FSN dataset demonstrates the validity of the proposed framework for fine-grained video caption.",
"title": ""
},
{
"docid": "bddec3337cfbc17412b042b58e1cdfeb",
"text": "Business organisations are constantly looking for ways to gain an advantage over their competitors (Beyleveld & Schurink, 2005; Castaneda & Toulson, 2013). Historically, their focus was on producing as much as possible without considering exact demand (Turner & Chung, 2005). Recently, businesses have embarked upon finding more efficient ways to deal with large turnovers (Umble, Haft & Umble, 2003). One way of achieving this is by employing an Enterprise Resource Planning (ERP) system. An ERP system is a mandatory, integrated, customised, packaged software-based system that handles most of the system requirements in all business operational functions such as finance, human resources, manufacturing, sales and marketing (Wua, Onga & Hsub, 2008). Although expectations from ERP systems are high, these systems have not always led to significant organisational enhancement (Soh, Kien & Tay-Yap, 2000) and most ERP projects turn out to be over budget, not on time and unsuccessful (Abugabah & Sanzogni, 2010; Hong & Kim, 2002; Kumar, Maheshwari & Kumar, 2003).",
"title": ""
},
{
"docid": "33df3da22e9a24767c68e022bb31bbe5",
"text": "The credit card industry has been growing rapidly recently, and thus huge numbers of consumers’ credit data are collected by the credit department of the bank. The credit scoring manager often evaluates the consumer’s credit with intuitive experience. However, with the support of the credit classification model, the manager can accurately evaluate the applicant’s credit score. Support Vector Machine (SVM) classification is currently an active research area and successfully solves classification problems in many domains. This study used three strategies to construct the hybrid SVM-based credit scoring models to evaluate the applicant’s credit score from the applicant’s input features. Two credit datasets in UCI database are selected as the experimental data to demonstrate the accuracy of the SVM classifier. Compared with neural networks, genetic programming, and decision tree classifiers, the SVM classifier achieved an identical classificatory accuracy with relatively few input features. Additionally, combining genetic algorithms with SVM classifier, the proposed hybrid GA-SVM strategy can simultaneously perform feature selection task and model parameters optimization. Experimental results show that SVM is a promising addition to the existing data mining methods. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f886f9ff8281b6ad34af111a06834c43",
"text": "Brain waves can aptly define the state of a person's mind. High activity and attention lead to dominant beta waves while relaxation and focus lead to dominant alpha waves in the brain. Alpha state of mind is ideal for learning and memory retention. In our experiment we aim to increase alpha waves and decrease beta waves in a person with the help of music to measure improvement in memory retention. Our hypothesis is that, when a person listens to music which causes relaxation, he is more likely to attain the alpha state of mind and enhance his memory retention ability. To verify this hypothesis, we conducted an experiment on 5 participants. The participants were asked to take a similar quiz twice, under different states of mind. During the experimentation process, the brain activity of the participants was recorded and analyzed using MUSE, an off-the-shelf device for brainwave capturing and analysis.",
"title": ""
},
{
"docid": "a5aa074c27add29fd038a83f02582fd1",
"text": "We develop an efficient general-purpose blind/no-reference image quality assessment (IQA) algorithm using a natural scene statistics (NSS) model of discrete cosine transform (DCT) coefficients. The algorithm is computationally appealing, given the availability of platforms optimized for DCT computation. The approach relies on a simple Bayesian inference model to predict image quality scores given certain extracted features. The features are based on an NSS model of the image DCT coefficients. The estimated parameters of the model are utilized to form features that are indicative of perceptual quality. These features are used in a simple Bayesian inference approach to predict quality scores. The resulting algorithm, which we name BLIINDS-II, requires minimal training and adopts a simple probabilistic model for score prediction. Given the extracted features from a test image, the quality score that maximizes the probability of the empirically determined inference model is chosen as the predicted quality score of that image. When tested on the LIVE IQA database, BLIINDS-II is shown to correlate highly with human judgments of quality, at a level that is competitive with the popular SSIM index.",
"title": ""
},
{
"docid": "b4f9d31073db42595c84fc7f4465f0e4",
"text": "We perform a game theoretic investigation of the effects of deception on the in teractions between an attacker and a defender of a computer network. The defend er can employ camouflage by either disguising a normal system as a honeypot, or by disguisin a honeypot as a normal system. We model the interactions between defender and attacker u sing a signaling game, a non-cooperative two player dynamic game of incomplete information. F or this model, we determine which strategies admit perfect Bayesian equilibria. These equ ilibria are refined Nash equilibria in which neither the defender nor the attacker will unilaterally c hoose to deviate from their strategies. We discuss the benefits of employing deceptive eq uilibrium strategies in the defense of a computer network.",
"title": ""
},
{
"docid": "6bdb8048915000b2d6c062e0e71b8417",
"text": "Depressive disorders are the most typical disease affecting many different factors of humanity. University students may be at increased risk of depression owing to the pressure and stress they encounter. Therefore, the purpose of this study is comparing the level of depression among male and female athletes and non-athletes undergraduate student of private university in Esfahan, Iran. The participants in this research are composed of 400 male and female athletes as well as no-athletes Iranian undergraduate students. The Beck depression test (BDI) was employed to measure the degree of depression. T-test was used to evaluate the distinction between athletes and non-athletes at P≤0.05. The ANOVA was conducted to examine whether there was a relationship between level of depression among non-athletes and athletes. The result showed that the prevalence rate of depression among non-athlete male undergraduate students is significantly higher than that of athlete male students. The results also presented that level of depression among female students is much more frequent compared to males. This can be due to the fatigue and lack of energy that are more frequent among female in comparison to the male students. Physical activity was negatively related to the level of depression by severity among male and female undergraduate students. However, there is no distinct relationship between physical activity and level of depression according to the age of athlete and nonathlete male and female undergraduate students. This study has essential implications for clinical psychology due to the relationship between physical activity and prevalence of depression.",
"title": ""
},
{
"docid": "07c34b068cc1217de2e623122a22d2b0",
"text": "Rheumatoid arthritis (RA) is a bone destructive autoimmune disease. Many patients with RA recognize fluctuations of their joint synovitis according to changes of air pressure, but the correlations between them have never been addressed in large-scale association studies. To address this point we recruited large-scale assessments of RA activity in a Japanese population, and performed an association analysis. Here, a total of 23,064 assessments of RA activity from 2,131 patients were obtained from the KURAMA (Kyoto University Rheumatoid Arthritis Management Alliance) database. Detailed correlations between air pressure and joint swelling or tenderness were analyzed separately for each of the 326 patients with more than 20 assessments to regulate intra-patient correlations. Association studies were also performed for seven consecutive days to identify the strongest correlations. Standardized multiple linear regression analysis was performed to evaluate independent influences from other meteorological factors. As a result, components of composite measures for RA disease activity revealed suggestive negative associations with air pressure. The 326 patients displayed significant negative mean correlations between air pressure and swellings or the sum of swellings and tenderness (p = 0.00068 and 0.00011, respectively). Among the seven consecutive days, the most significant mean negative correlations were observed for air pressure three days before evaluations of RA synovitis (p = 1.7 × 10(-7), 0.00027, and 8.3 × 10(-8), for swellings, tenderness and the sum of them, respectively). Standardized multiple linear regression analysis revealed these associations were independent from humidity and temperature. Our findings suggest that air pressure is inversely associated with synovitis in patients with RA.",
"title": ""
},
{
"docid": "458e4b5196805b608e15ee9c566123c9",
"text": "For the first half century of animal virology, the major problem was lack of a simple method for quantitating infectious virus particles; the only method available at that time was some form or other of the serial-dilution end-point method in animals, all of which were both slow and expensive. Cloned cultured animal cells, which began to be available around 1950, provided Dulbecco with a new approach. He adapted the technique developed by Emory Ellis and Max Delbrück for assaying bacteriophage, that is, seeding serial dilutions of a given virus population onto a confluent lawn of host cells, to the measurement of Western equine encephalitis virus, and demonstrated that it also formed easily countable plaques in monolayers of chick embryo fibroblasts. The impact of this finding was enormous; animal virologists had been waiting for such a technique for decades. It was immediately found to be widely applicable to many types of cells and most viruses, gained quick acceptance, and is widely regarded as marking the beginning of molecular animal virology. Renato Dulbecco was awarded the Nobel Prize in 1975. W. K. JOKLIK",
"title": ""
},
{
"docid": "88478e315049f2c155bb611d797e8eb1",
"text": "In this paper we analyze aspects of the intellectual property strategies of firms in the global cosmetics and toilet preparations industry. Using detailed data on all 4,205 EPO patent grants in the relevant IPC class between 1980 and 2001, we find that about 15 percent of all patents are challenged in EPO opposition proceedings, a rate about twice as high as in the overall population of EPO patents. Moreover, opposition in this sector is more frequent than in chemicals-based high technology industries such as biotechnology and pharmaceuticals. About one third of the opposition cases involve multiple opponents. We search for rationales that could explain this surprisingly strong “IP litigation” activity. In a first step, we use simple probability models to analyze the likelihood of opposition as a function of characteristics of the attacked patent. We then introduce owner firm variables and find that major differences across firms in the likelihood of having their patents opposed prevail even after accounting for other influences. Aggressive opposition in the past appears to be associated with a reduction of attacks on own patents. In future work we will look at the determinants of outcomes and duration of these oppositions, in an attempt to understand the firms’ strategies more fully. Acknowledgements This version of the paper was prepared for presentation at the Productivity Program meetingsof the NBER Summer Institute. An earlier version of the paper was presented in February 2002 at the University of Maastricht Workshop on Strategic Management, Innovation and Econometrics, held at Chateau St. Gerlach, Valkenburg. We would like to thank the participants and in particular Franz Palm and John Hagedoorn for their helpful comments.",
"title": ""
},
{
"docid": "68865e653e94d3366961434cc012363f",
"text": "Solving the problem of consciousness remains one of the biggest challenges in modern science. One key step towards understanding consciousness is to empirically narrow down neural processes associated with the subjective experience of a particular content. To unravel these neural correlates of consciousness (NCC) a common scientific strategy is to compare perceptual conditions in which consciousness of a particular content is present with those in which it is absent, and to determine differences in measures of brain activity (the so called \"contrastive analysis\"). However, this comparison appears not to reveal exclusively the NCC, as the NCC proper can be confounded with prerequisites for and consequences of conscious processing of the particular content. This implies that previous results cannot be unequivocally interpreted as reflecting the neural correlates of conscious experience. Here we review evidence supporting this conjecture and suggest experimental strategies to untangle the NCC from the prerequisites and consequences of conscious experience in order to further develop the otherwise valid and valuable contrastive methodology.",
"title": ""
},
{
"docid": "77a247205e5dc5de0d179b8313adfc9d",
"text": "Social media such as tweets are emerging as platforms contributing to situational awareness during disasters. Information shared on Twitter by both affected population (e.g., requesting assistance, warning) and those outside the impact zone (e.g., providing assistance) would help first responders, decision makers, and the public to understand the situation first-hand. Effective use of such information requires timely selection and analysis of tweets that are relevant to a particular disaster. Even though abundant tweets are promising as a data source, it is challenging to automatically identify relevant messages since tweet are short and unstructured, resulting to unsatisfactory classification performance of conventional learning-based approaches. Thus, we propose a simple yet effective algorithm to identify relevant messages based on matching keywords and hashtags, and provide a comparison between matching-based and learning-based approaches. To evaluate the two approaches, we put them into a framework specifically proposed for analyzing diaster-related tweets. Analysis results on eleven datasets with various disaster types show that our technique provides relevant tweets of higher quality and more interpretable results of sentiment analysis tasks when compared to learning approach.",
"title": ""
},
{
"docid": "4b90fefa981e091ac6a5d2fd83e98b66",
"text": "This paper explores an analysis-aware data cleaning architecture for a large class of SPJ SQL queries. In particular, we propose QuERy, a novel framework for integrating entity resolution (ER) with query processing. The aim of QuERy is to correctly and efficiently answer complex queries issued on top of dirty data. The comprehensive empirical evaluation of the proposed solution demonstrates its significant advantage in terms of efficiency over the traditional techniques for the given problem settings.",
"title": ""
},
{
"docid": "809aed520d0023535fec644e81ddbb53",
"text": "This paper presents an efficient image denoising scheme by using principal component analysis (PCA) with local pixel grouping (LPG). For a better preservation of image local structures, a pixel and its nearest neighbors are modeled as a vector variable, whose training samples are selected from the local window by using block matching based LPG. Such an LPG procedure guarantees that only the sample blocks with similar contents are used in the local statistics calculation for PCA transform estimation, so that the image local features can be well preserved after coefficient shrinkage in the PCA domain to remove the noise. The LPG-PCA denoising procedure is iterated one more time to further improve the denoising performance, and the noise level is adaptively adjusted in the second stage. Experimental results on benchmark test images demonstrate that the LPG-PCA method achieves very competitive denoising performance, especially in image fine structure preservation, compared with state-of-the-art denoising algorithms. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "85e3992ff97ae284218cf47dcb57abec",
"text": "Software has been part of modern society for more than 50 years. There are several software development methodologies in use today. Some companies have their own customized methodology for developing their software but the majority speaks about two kinds of methodologies: heavyweight and lightweight. Heavyweight methodologies, also considered as the traditional way to develop software, claim their support to comprehensive planning, detailed documentation, and expansive design. The lightweight methodologies, also known as agile modeling, have gained significant attention from the software engineering community in the last few years. Unlike traditional methods, agile methodologies employ short iterative cycles, and rely on tacit knowledge within a team as opposed to documentation. In this dissertation, I have described the characteristics of some traditional and agile methodologies that are widely used in software development. I have also discussed the strengths and weakness between the two opposing methodologies and provided the challenges associated with implementing agile processes in the software industry. This anecdotal evidence is rising regarding the effectiveness of agile methodologies in certain environments; but there have not been much collection and analysis of empirical evidence for agile projects. However, to support my dissertation I conducted a questionnaire, soliciting feedback from software industry practitioners to evaluate which methodology has a better success rate for different sizes of software development. According to our findings agile methodologies can provide good benefits for small scaled and medium scaled projects but for large scaled projects traditional methods seem dominant.",
"title": ""
},
{
"docid": "a34825f20b645a146857c1544c08e66e",
"text": "1. The midterm will have about 5-6 long questions, and about 8-10 short questions. Space will be provided on the actual midterm for you to write your answers. 2. The midterm is meant to be educational, and as such some questions could be quite challenging. Use your time wisely to answer as much as you can! 3. For additional practice, please see CS 229 extra problem sets available at 1. [13 points] Generalized Linear Models Recall that generalized linear models assume that the response variable y (conditioned on x) is distributed according to a member of the exponential family: p(y; η) = b(y) exp(ηT (y) − a(η)), where η = θ T x. For this problem, we will assume η ∈ R. (a) [10 points] Given a training set {(x (i) , y (i))} m i=1 , the loglikelihood is given by (θ) = m i=1 log p(y (i) | x (i) ; θ). Give a set of conditions on b(y), T (y), and a(η) which ensure that the loglikelihood is a concave function of θ (and thus has a unique maximum). Your conditions must be reasonable, and should be as weak as possible. (E.g., the answer \" any b(y), T (y), and a(η) so that (θ) is concave \" is not reasonable. Similarly, overly narrow conditions, including ones that apply only to specific GLMs, are also not reasonable.) (b) [3 points] When the response variable is distributed according to a Normal distribution (with unit variance), we have b(y) = 1 √ 2π e −y 2 2 , T (y) = y, and a(η) = η 2 2. Verify that the condition(s) you gave in part (a) hold for this setting.",
"title": ""
},
{
"docid": "e2a5a97b60e01ac4ff6367989ff89756",
"text": "This paper presents a half-select free 9T SRAM to facilitate reliable SRAM operation in the near-threshold voltage region. In the proposed SRAM, the half-select disturbance, which results in instable operations in 6T SRAM cell, can be completely eliminated by adopting cross-access selection of row and column word-lines. To minimize the area overhead of the half-select free 9T SRAM cell, a bit-line and access transistors between the adjacent cells are shared using a symmetric shared node that connects two cells. In addition, a selective pre-charge scheme considering the preferably isolated unselected cells has also been proposed to reduce the dynamic power consumption. The simulation results with the most probable failure point method show that the proposed 9T SRAM cell has a minimum operating voltage (VMIN) of 0.45 V among the half-select free SRAM cells. The test chip with 65-nm CMOS technology shows that the proposed 9T SRAM is fully operated at 0.35 V and 25 °C condition. Under the supply voltages between 0.35 and 1.1 V, the 4-kb SRAM macro is operated between 640 kHz and 560 MHz, respectively. The proposed 9T SRAM shows the best voltage scalability without any assist circuit while maintaining small macro area and fast operation frequency.",
"title": ""
}
] |
scidocsrr
|
14de23083ae6c10c32709e4853abe147
|
Explicit Modeling of Human-Object Interactions in Realistic Videos
|
[
{
"docid": "b9a893fb526955b5131860a1402e2f7c",
"text": "A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.",
"title": ""
}
] |
[
{
"docid": "f38ad855c66a43529d268b81c9ea4c69",
"text": "In the recent years, countless security concerns related to automotive systems were revealed either by academic research or real life attacks. While current attention was largely focused on passenger cars, due to their ubiquity, the reported bus-related vulnerabilities are applicable to all industry sectors where the same bus technology is deployed, i.e., the CAN bus. The SAE J1939 specification extends and standardizes the use of CAN to commercial vehicles where security plays an even higher role. In contrast to empirical results that attest such vulnerabilities in commercial vehicles by practical experiments, here, we determine that existing shortcomings in the SAE J1939 specifications open road to several new attacks, e.g., impersonation, denial of service (DoS), distributed DoS, etc. Taking the advantage of an industry-standard CANoe based simulation, we demonstrate attacks with potential safety critical effects that are mounted while still conforming to the SAE J1939 standard specification. We discuss countermeasures and security enhancements by including message authentication mechanisms. Finally, we evaluate and discuss the impact of employing these mechanisms on the overall network communication.",
"title": ""
},
{
"docid": "65e273d046a8120532d8cd04bcadca56",
"text": "This paper explores the relationship between domain scheduling in avirtual machine monitor (VMM) and I/O performance. Traditionally, VMM schedulers have focused on fairly sharing the processor resources among domains while leaving the scheduling of I/O resources as asecondary concern. However, this can resultin poor and/or unpredictable application performance, making virtualization less desirable for applications that require efficient and consistent I/O behavior.\n This paper is the first to study the impact of the VMM scheduler on performance using multiple guest domains concurrently running different types of applications. In particular, different combinations of processor-intensive, bandwidth-intensive, andlatency-sensitive applications are run concurrently to quantify the impacts of different scheduler configurations on processor and I/O performance. These applications are evaluated on 11 different scheduler configurations within the Xen VMM. These configurations include a variety of scheduler extensions aimed at improving I/O performance. This cross product of scheduler configurations and application types offers insight into the key problems in VMM scheduling for I/O and motivates future innovation in this area.",
"title": ""
},
{
"docid": "2beabe7d2756fea530172943b9e374e7",
"text": "Migraine treatment has evolved from the realms of the supernatural into the scientific arena, but it seems still controversial whether migraine is primarily a vascular or a neurological dysfunction. Irrespective of this controversy, the levels of serotonin (5-hydroxytryptamine; 5-HT), a vasoconstrictor and a central neurotransmitter, seem to decrease during migraine (with associated carotid vasodilatation) whereas an i.v. infusion of 5-HT can abort migraine. In fact, 5-HT as well as ergotamine, dihydroergotamine and other antimigraine agents invariably produce vasoconstriction in the external carotid circulation. The last decade has witnessed the advent of sumatriptan and second generation triptans (e.g. zolmitriptan, rizatriptan, naratriptan), which belong to a new class of drugs, now known as 5-HT1B/1D/1F receptor agonists. Compared to sumatriptan, the second-generation triptans have a higher oral bioavailability and longer plasma half-life. In line with the vascular and neurogenic theories of migraine, all triptans produce selective carotid vasoconstriction (via 5-HT1B receptors) and presynaptic inhibition of the trigeminovascular inflammatory responses implicated in migraine (via 5-HT1D/5-ht1F receptors). Moreover, selective agonists at 5-HT1D (PNU-142633) and 5-ht1F (LY344864) receptors inhibit the trigeminovascular system without producing vasoconstriction. Nevertheless, PNU-142633 proved to be ineffective in the acute treatment of migraine, whilst LY344864 did show some efficacy when used in doses which interact with 5-HT1B receptors. Finally, although the triptans are effective antimigraine agents producing selective cranial vasoconstriction, efforts are being made to develop other effective antimigraine alternatives acting via the direct blockade of vasodilator mechanisms (e.g. antagonists at CGRP receptors, antagonists at 5-HT7 receptors, inhibitors of nitric oxide biosynthesis, etc). These alternatives will hopefully lead to fewer side-effects.",
"title": ""
},
{
"docid": "732dbb2f505ecccf46097f3022770811",
"text": "In this paper we describe and share with the research community, a significant smartphone dataset obtained from an ongoing long-term data collection experiment. The dataset currently contains 10 billion data records from 30 users collected over a period of 1.6 years and an additional 20 users for 6 months (totaling 50 active users currently participating in the experiment).\n The experiment involves two smartphone agents: SherLock and Moriarty. SherLock collects a wide variety of software and sensor data at a high sample rate. Moriarty perpetrates various attacks on the user and logs its activities, thus providing labels for the SherLock dataset.\n The primary purpose of the dataset is to help security professionals and academic researchers in developing innovative methods of implicitly detecting malicious behavior in smartphones. Specifically, from data obtainable without superuser (root) privileges. To demonstrate possible uses of the dataset, we perform a basic malware analysis and evaluate a method of continuous user authentication.",
"title": ""
},
{
"docid": "0c6242d71bb9c4e4df48d1a6672590d3",
"text": "This review article is a continuation of the paper “Hepatitis B core particles as a universal display model: a structure-function basis for development” written by Pumpens P. and Grens E., ordered by Professor Lev Kisselev and published in FEBS Letters, 1999, 442, 1–6. The past 17 years have strengthened the paper’s finding that the human hepatitis B virus core protein, along with other Hepadnaviridae family member core proteins, is a mysterious, multifunctional protein. The core gene of the Hepadnaviridae genome encodes five partially collinear proteins. The most important of these is the HBV core protein p21, or HBc. It can self-assemble by forming viral HBc particles, but also plays a crucial role in the regulation of viral replication. Since 1986, the HBc protein has been one of the first and the most successful tools of the virus-like particle (VLP) technology. Later, the woodchuck hepatitis virus core protein (WHc) was also used as a VLP carrier. The Hepadnaviridae core proteins remain favourite VLP candidates for the knowledge-based design of future vaccines, gene therapy vectors, specifically targeted nanocontainers, and other modern nanotechnological tools for prospective medical use.",
"title": ""
},
{
"docid": "71b265e5aceb2e7b2b837deba7fd7d08",
"text": "We address the problem of code search in executables. Given a function in binary form and a large code base, our goal is to statically find similar functions in the code base. Towards this end, we present a novel technique for computing similarity between functions. Our notion of similarity is based on decomposition of functions into tracelets: continuous, short, partial traces of an execution. To establish tracelet similarity in the face of low-level compiler transformations, we employ a simple rewriting engine. This engine uses constraint solving over alignment constraints and data dependencies to match registers and memory addresses between tracelets, bridging the gap between tracelets that are otherwise similar. We have implemented our approach and applied it to find matches in over a million binary functions. We compare tracelet matching to approaches based on n-grams and graphlets and show that tracelet matching obtains dramatically better precision and recall.",
"title": ""
},
{
"docid": "077e4307caf9ac3c1f9185f0eaf58524",
"text": "Many text mining tools cannot be applied directly to documents available on web pages. There are tools for fetching and preprocessing of textual data, but combining them in one working tool chain can be time consuming. The preprocessing task is even more labor-intensive if documents are located on multiple remote sources with different storage formats. In this paper we propose the simplification of data preparation process for cases when data come from wide range of web resources. We developed an open-sourced tool, called Kayur, that greatly minimizes time and effort required for routine data preprocessing steps, allowing to quickly proceed to the main task of data analysis. The datasets generated by the tool are ready to be loaded into a data mining workbench, such as WEKA or Carrot2, to perform classification, feature prediction, and other data mining tasks.",
"title": ""
},
{
"docid": "a98486ae2b434ed5b6c6c866dae2e15a",
"text": "A large number of algorithms have been proposed for feature subset selection. Our experimental results show that the sequential forward oating selection (SFFS) algorithm, proposed by Pudil et al., dominates the other algorithms tested. We study the problem of choosing an optimal feature set for land use classi cation based on SAR satellite images using four di erent texture models. Pooling features derived from di erent texture models, followed by a feature selection results in a substantial improvement in the classi cation accuracy. We also illustrate the dangers of using feature selection in small sample size situations.",
"title": ""
},
{
"docid": "8641df504b9f8c55c1951294e47875e4",
"text": "3D ultrasound (US) acquisition acquires volumetric images, thus alleviating a classical US imaging bottleneck that requires a highly-trained sonographer to operate the US probe. However, this opportunity has not been explored in practice, since 3D US machines are only suitable for hospital usage in terms of cost, size and power requirements. In this work we propose the first fully-digital, single-chip 3D US imager on FPGA. The proposed design is a complete processing pipeline that includes pre-processing, image reconstruction, and post-processing. It supports up to 1024 input channels, which matches or exceeds state of the art, in an unprecedented estimated power budget of 6.1 W. The imager exploits a highly scalable architecture which can be either downscaled for 2D imaging, or further upscaled on a larger FPGA. Our platform supports both real-time inputs over an optical cable, or test data feeds sent by a laptop running Matlab and custom tools over an Ethernet connection. Additionally, the design allows HDMI video output on a screen.",
"title": ""
},
{
"docid": "01875eeb7da3676f46dd9d3f8bf3ecac",
"text": "It is shown that a certain tour of 49 cities, one in each of the 48 states and Washington, D C , has the shortest road distance T HE TRAVELING-SALESMAN PROBLEM might be described as follows: Find the shortest route (tour) for a salesman starting from a given city, visiting each of a specified group of cities, and then returning to the original point of departure. More generally, given an n by n symmetric matrix D={d,j), where du represents the 'distance' from / to J, arrange the points in a cyclic order in such a way that the sum of the du between consecutive points is minimal. Since there are only a finite number of possibilities (at most 3>' 2 (« —1)0 to consider, the problem is to devise a method of picking out the optimal arrangement which is reasonably efficient for fairly large values of n. Although algorithms have been devised for problems of similar nature, e.g., the optimal assignment problem,''** little is known about the traveling-salesman problem. We do not claim that this note alters the situation very much; what we shall do is outline a way of approaching the problem that sometimes, at least, enables one to find an optimal path and prove it so. In particular, it will be shown that a certain arrangement of 49 cities, one m each of the 48 states and Washington, D. C, is best, the du used representing road distances as taken from an atlas. * HISTORICAL NOTE-The origin of this problem is somewhat obscure. It appears to have been discussed informally among mathematicians at mathematics meetings for many years. Surprisingly little in the way of results has appeared in the mathematical literature.'\" It may be that the minimal-distance tour problem was stimulated by the so-called Hamiltonian game' which is concerned with finding the number of different tours possible over a specified network The latter problem is cited by some as the origin of group theory and has some connections with the famou8 Four-Color Conjecture ' Merrill Flood (Columbia Universitj') should be credited with stimulating interest in the traveling-salesman problem in many quarters. As early as 1937, he tried to obtain near optimal solutions in reference to routing of school buses. Both Flood and A W. Tucker (Princeton University) recall that they heard about the problem first in a seminar talk by Hassler Whitney at Princeton in 1934 (although Whitney, …",
"title": ""
},
{
"docid": "9c533c7059640ef502a75df36d310a91",
"text": "Reference phylogenies are crucial for providing a taxonomic framework for interpretation of marker gene and metagenomic surveys, which continue to reveal novel species at a remarkable rate. Greengenes is a dedicated full-length 16S rRNA gene database that provides users with a curated taxonomy based on de novo tree inference. We developed a ‘taxonomy to tree’ approach for transferring group names from an existing taxonomy to a tree topology, and used it to apply the Greengenes, National Center for Biotechnology Information (NCBI) and cyanoDB (Cyanobacteria only) taxonomies to a de novo tree comprising 408 315 sequences. We also incorporated explicit rank information provided by the NCBI taxonomy to group names (by prefixing rank designations) for better user orientation and classification consistency. The resulting merged taxonomy improved the classification of 75% of the sequences by one or more ranks relative to the original NCBI taxonomy with the most pronounced improvements occurring in under-classified environmental sequences. We also assessed candidate phyla (divisions) currently defined by NCBI and present recommendations for consolidation of 34 redundantly named groups. All intermediate results from the pipeline, which includes tree inference, jackknifing and transfer of a donor taxonomy to a recipient tree (tax2tree) are available for download. The improved Greengenes taxonomy should provide important infrastructure for a wide range of megasequencing projects studying ecosystems on scales ranging from our own bodies (the Human Microbiome Project) to the entire planet (the Earth Microbiome Project). The implementation of the software can be obtained from http://sourceforge.net/projects/tax2tree/.",
"title": ""
},
{
"docid": "385aacadafef9cfd1a5b00bbe8f871c0",
"text": "We present a 6.5mm3, 10mg, wireless peripheral nerve stimulator. The stimulator is powered and controlled through ultrasound from an external transducer and utilizes a single 750×750×750μm3 piezocrystal for downlink communication, powering, and readout, reducing implant volume and mass. An IC with 0.06mm2 active circuit area, designed in TSMC 65nm LPCMOS process, converts harvested ultrasound to stimulation charge with a peak efficiency of 82%. A custom wireless protocol that does not require a clock or memory circuits reduces on-chip power to 4μW when not stimulating. The encapsulated stimulator was cuffed to the sciatic nerve of an anesthetized rodent and demonstrated full-scale nerve activation in vivo. We achieve a highly efficient and temporally precise wireless peripheral nerve stimulator that is the smallest and lightest to our knowledge.",
"title": ""
},
{
"docid": "17c922d0b04c83a30c32c4967b81e2ca",
"text": "Distant supervision (DS) is an appealing learning method which learns from existing relational facts to extract more from a text corpus. However, the accuracy is still not satisfying. In this paper, we point out and analyze some critical factors in DS which have great impact on accuracy, including valid entity type detection, negative training examples construction and ensembles. We propose an approach to handle these factors. By experimenting on Wikipedia articles to extract the facts in Freebase (the top 92 relations), we show the impact of these three factors on the accuracy of DS and the remarkable improvement led by the proposed approach.",
"title": ""
},
{
"docid": "429f27ab8039a9e720e9122f5b1e3bea",
"text": "We give a new method for direct reconstruction of three-dimensional objects from a few electron micrographs taken at angles which need not exceed a range of 60 degrees. The method works for totally asymmetric objects, and requires little computer time or storage. It is also applicable to X-ray photography, and may greatly reduce the exposure compared to current methods of body-section radiography.",
"title": ""
},
{
"docid": "eebcb6810135300bc5fb1c3c70502a5c",
"text": "Many governments are considering adopting the smart city concept in their cities and implementing big data applications that support smart city components to reach the required level of sustainability and improve the living standards. Smart cities utilize multiple technologies to improve the performance of health, transportation, energy, education, and water services leading to higher levels of comfort of their citizens. This involves reducing costs and resource consumption in addition to more effectively and actively engaging with their citizens. One of the recent technologies that has a huge potential to enhance smart city services is big data analytics. As digitization has become an integral part of everyday life, data collection has resulted in the accumulation of huge amounts of data that can be used in various beneficial application domains. Effective analysis and utilization of big data is a key factor for success in many business and service domains, including the smart city domain. This paper reviews the applications of big data to support smart cities. It discusses and compares different definitions of the smart city and big data and explores the opportunities, challenges and benefits of incorporating big data applications for smart cities. In addition it attempts to identify the requirements that support the implementation of big data applications for smart city services. The review reveals that several opportunities are available for utilizing big data in smart cities; however, there are still many issues and challenges to be addressed to achieve better utilization of this technology.",
"title": ""
},
{
"docid": "0dc9f8f65efd02f16fea77d910fd73c7",
"text": "The visual system is the most studied sensory pathway, which is partly because visual stimuli have rather intuitive properties. There are reasons to think that the underlying principle ruling coding, however, is the same for vision and any other type of sensory signal, namely the code has to satisfy some notion of optimality--understood as minimum redundancy or as maximum transmitted information. Given the huge variability of natural stimuli, it would seem that attaining an optimal code is almost impossible; however, regularities and symmetries in the stimuli can be used to simplify the task: symmetries allow predicting one part of a stimulus from another, that is, they imply a structured type of redundancy. Optimal coding can only be achieved once the intrinsic symmetries of natural scenes are understood and used to the best performance of the neural encoder. In this paper, we review the concepts of optimal coding and discuss the known redundancies and symmetries that visual scenes have. We discuss in depth the only approach which implements the three of them known so far: translational invariance, scale invariance and multiscaling. Not surprisingly, the resulting code possesses features observed in real visual systems in mammals.",
"title": ""
},
{
"docid": "1d54618924aa4817e9dc3e085fe514e6",
"text": "Efforts have been recently made to construct ontologies for network security. The proposed ontologies are related to specific aspects of network security. Therefore, it is necessary to identify the specific aspects covered by existing ontologies for network security. A review and analysis of the principal issues, challenges, and the extent of progress related to distinct ontologies was performed. Each example was classified according to the typology of the ontologies for network security. Some aspects include identifying threats, intrusion detection systems (IDS), alerts, attacks, countermeasures, security policies, and network management tools. The research performed here proposes the use of three stages: 1. Inputs; 2. Processing; and 3. Outputs. The analysis resulted in the introduction of new challenges and aspects that may be used as the basis for future research. One major issue that was discovered identifies the need to develop new ontologies that relate to distinct aspects of network security, thereby facilitating management tasks.",
"title": ""
},
{
"docid": "74328635f7a9a24b3a535df29e9045fd",
"text": "This paper presents Star-EDT—a novel deterministic test compression scheme. The proposed solution seamlessly integrates with EDT-based compression and takes advantage of two key observations: 1) there exist clusters of test vectors that can detect many random-resistant faults with a cluster comprising a parent pattern and its derivatives obtained through simple transformations and 2) a significant majority of specified positions of ATPG-produced test cubes are typically clustered within a single or, at most, a few scan chains. The Star-EDT approach elevates compression ratios to values typically unachievable through conventional reseeding-based solutions. Experimental results obtained for large industrial designs, including those with a new class of test points aware of ATPG-induced conflicts, illustrate feasibility of the proposed deterministic test scheme and are reported herein. In particular, they confirm that the Star-EDT can act as a valuable form of deterministic BIST.",
"title": ""
},
{
"docid": "f5e934d65fa436cdb8e5cfa81ea29028",
"text": "Recently, there has been substantial research on augmenting aggregate forecasts with individual consumer data from internet platforms, such as search traffic or social network shares. Although the majority of studies report increased accuracy, many exhibit design weaknesses including lack of adequate benchmarks or rigorous evaluation. Furthermore, their usefulness over the product life-cycle has not been investigated, which may change, as initially, consumers may search for pre-purchase information, but later for after-sales support. In this study, we first review the relevant literature and then attempt to support the key findings using two forecasting case studies. Our findings are in stark contrast to the literature, and we find that established univariate forecasting benchmarks, such as exponential smoothing, consistently perform better than when online information is included. Our research underlines the need for thorough forecast evaluation and argues that online platform data may be of limited use for supporting operational decisions.",
"title": ""
},
{
"docid": "8ca30cd6fd335024690837c137f0d1af",
"text": "Non-negative matrix factorization (NMF) is a recently deve loped technique for finding parts-based, linear representations of non-negative data. Although it h as successfully been applied in several applications, it does not always result in parts-based repr esentations. In this paper, we show how explicitly incorporating the notion of ‘sparseness’ impro ves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF a nd for our extension. Our hope is that this will further the application of these methods to olving novel data-analysis problems.",
"title": ""
}
] |
scidocsrr
|
69293a4cb7fbac3fa383d843b25f69ca
|
The impact of industrial wearable system on industry 4.0
|
[
{
"docid": "be3721ebf2c55972146c3e87aee475ba",
"text": "Advances in computation and communication are taking shape in the form of the Internet of Things, Machine-to-Machine technology, Industry 4.0, and Cyber-Physical Systems (CPS). The impact on engineering such systems is a new technical systems paradigm based on ensembles of collaborating embedded software systems. To successfully facilitate this paradigm, multiple needs can be identified along three axes: (i) online configuring an ensemble of systems, (ii) achieving a concerted function of collaborating systems, and (iii) providing the enabling infrastructure. This work focuses on the collaborative function dimension and presents a set of concrete examples of CPS challenges. The examples are illustrated based on a pick and place machine that solves a distributed version of the Towers of Hanoi puzzle. The system includes a physical environment, a wireless network, concurrent computing resources, and computational functionality such as, service arbitration, various forms of control, and processing of streaming video. The pick and place machine is of medium-size complexity. It is representative of issues occurring in industrial systems that are coming online. The entire study is provided at a computational model level, with the intent to contribute to the model-based research agenda in terms of design methods and implementation technologies necessary to make the next generation systems a reality.",
"title": ""
}
] |
[
{
"docid": "cd4e04370b1e8b1f190a3533c3f4afe2",
"text": "Perception of depth is a central problem m machine vision. Stereo is an attractive technique for depth perception because, compared with monocular techniques, it leads to more direct, unambiguous, and quantitative depth measurements, and unlike \"active\" approaches such as radar and laser ranging, it is suitable in almost all application domains. Computational stereo is broadly defined as the recovery of the three-dimensional characteristics of a scene from multiple images taken from different points of view. First, each of the functional components of the computational stereo paradigm--image acquLsition, camera modeling, feature acquisition, image matching, depth determination, and interpolation--is identified and discussed. Then, the criteria that are important for evaluating the effectiveness of various computational stereo techniques are presented. Finally a representative sampling of computational stereo research is provided.",
"title": ""
},
{
"docid": "9d33565dbd5148730094a165bb2e968f",
"text": "The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm2 of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.",
"title": ""
},
{
"docid": "b37db75dcd62cc56977d1a28a81be33e",
"text": "In this article we report on a new digital interactive self-report method for the measurement of human affect. The AffectButton (Broekens & Brinkman, 2009) is a button that enables users to provide affective feedback in terms of values on the well-known three affective dimensions of Pleasure (Valence), Arousal and Dominance. The AffectButton is an interface component that functions and looks like a medium-sized button. The button presents one dynamically changing iconic facial expression that changes based on the coordinates of the user’s pointer in the button. To give affective feedback the user selects the most appropriate expression by clicking the button, effectively enabling 1-click affective self-report on 3 affective dimensions. Here we analyze 5 previously published studies, and 3 novel large-scale studies (n=325, n=202, n=128). Our results show the reliability, validity, and usability of the button for acquiring three types of affective feedback in various domains. The tested domains are holiday preferences, real-time music annotation, emotion words, and textual situation descriptions (ANET). The types of affective feedback tested are preferences, affect attribution to the previously mentioned stimuli, and self-reported mood. All of the subjects tested were Dutch and aged between 15 and 56 years. We end this article with a discussion of the limitations of the AffectButton and of its relevance to areas including recommender systems, preference elicitation, social computing, online surveys, coaching and tutoring, experimental psychology and psychometrics, content annotation, and game consoles.",
"title": ""
},
{
"docid": "c11fe7d0d9786845cadf633a8ceea46d",
"text": "Introduction. Circumcision is a common procedure carried out around the world. Due to religious reasons, it is routinely done in Bangladesh, by both traditional as well as medically trained circumcisers. Complications include excessive bleeding, loss of foreskin, infection, and injury to the glans penis. Myiasis complicating male circumcision appears to be very rare. Case Presentation. In 2010, a 10-year-old boy presented to the OPD of Dhaka Medical College Hospital with severe pain in his penile region following circumcision 7-days after. The procedure was carried out by a traditional circumciser using unsterilized instruments and dressing material. After examination, unhealthy granulation tissue was seen and maggots started coming out from the site of infestation, indicating presence of more maggots underneath the skin. An emergency operation was carried out to remove the maggots and reconstruction was carried out at the plastic surgery department. Conclusion. There is scarcity of literature regarding complications following circumcision in developing countries. Most dangerous complications are a result of procedure carried out by traditional circumcisers who are inadequately trained. Incidence of such complications can be prevented by establishing a link between the formal and informal sections of healthcare to improve the safety of the procedure.",
"title": ""
},
{
"docid": "0b97ba6017a7f94ed34330555095f69a",
"text": "In response to stress, the brain activates several neuropeptide-secreting systems. This eventually leads to the release of adrenal corticosteroid hormones, which subsequently feed back on the brain and bind to two types of nuclear receptor that act as transcriptional regulators. By targeting many genes, corticosteroids function in a binary fashion, and serve as a master switch in the control of neuronal and network responses that underlie behavioural adaptation. In genetically predisposed individuals, an imbalance in this binary control mechanism can introduce a bias towards stress-related brain disease after adverse experiences. New candidate susceptibility genes that serve as markers for the prediction of vulnerable phenotypes are now being identified.",
"title": ""
},
{
"docid": "386feb461948b94809c0cc075e2b4002",
"text": "GPflow is a Gaussian process library that uses TensorFlow for its core computations and Python for its front end. The distinguishing features of GPflow are that it uses variational inference as the primary approximation method, provides concise code through the use of automatic differentiation, has been engineered with a particular emphasis on software testing and is able to exploit GPU hardware. 1. GPflow and TensorFlow are available as open source software under the Apache 2.0 license. c ©2017 Alexander G. de G. Matthews, Mark van der Wilk, Tom Nickson, Keisuke Fujii, Alexis Boukouvalas, Pablo León-Villagrá, Zoubin Ghahramani, and James Hensman. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v18/16-537.html.",
"title": ""
},
{
"docid": "1af5c5e20c1ce827f899dc70d0495bdc",
"text": "High power sources and high sensitivity detectors are highly in demand for terahertz imaging and sensing systems. Use of nano-antennas and nano-plasmonic light concentrators in photoconductive terahertz sources and detectors has proven to offer significantly higher terahertz radiation powers and detection sensitivities by enhancing photoconductor quantum efficiency while maintaining its ultrafast operation. This is because of the unique capability of nano-antennas and nano-plasmonic structures in manipulating the concentration of photo-generated carriers within the device active area, allowing a larger number of photocarriers to efficiently contribute to terahertz radiation and detection. An overview of some of the recent advancements in terahertz optoelectronic devices through use of various types of nano-antennas and nano-plasmonic light concentrators is presented in this article.",
"title": ""
},
{
"docid": "828d88119a34b73044ce407de98e37f8",
"text": "We propose a novel modular underwater robot which can self-reconfigure by stacking and unstacking its component modules. Applications for this robot include underwater monitoring, exploration, and surveillance. Our current prototype is a single module which contains several subsystems that later will be segregated into different modules. This robot functions as a testbed for the subsystems which are needed in the modular implementation. We describe the module design and discuss the propulsion, docking, and optical ranging subsystems in detail. Experimental results demonstrate depth control, linear motion, target module detection, and docking capabilities.",
"title": ""
},
{
"docid": "22629b96f1172328e654ea6ed6dccd92",
"text": "This paper uses the case of contract manufacturing in the electronics industry to illustrate an emergent American model of industrial organization, the modular production network. Lead firms in the modular production network concentrate on the creation, penetration, and defense of markets for end products—and increasingly the provision of services to go with them—while manufacturing capacity is shifted out-of-house to globally-operating turn-key suppliers. The modular production network relies on codified inter-firm links and the generic manufacturing capacity residing in turn-key suppliers to reduce transaction costs, build large external economies of scale, and reduce risk for network actors. I test the modular production network model against some of the key theoretical tools that have been developed to predict and explain industry structure: Joseph Schumpeter's notion of innovation in the giant firm, Alfred Chandler's ideas about economies of speed and the rise of the modern corporation, Oliver Williamson's transaction cost framework, and a range of other production network models that appear in the literature. I argue that the modular production network yields better economic performance in the context of globalization than more spatially and socially embedded network models. I view the emergence of the modular production network as part of a historical process of industrial transformation in which nationally-specific models of industrial organization co-evolve in intensifying rounds of competition, diffusion, and adaptation.",
"title": ""
},
{
"docid": "7afa24cc5aa346b79436c1b9b7b15b23",
"text": "Humans demonstrate remarkable abilities to predict physical events in complex scenes. Two classes of models for physical scene understanding have recently been proposed: “Intuitive Physics Engines”, or IPEs, which posit that people make predictions by running approximate probabilistic simulations in causal mental models similar in nature to video-game physics engines, and memory-based models, which make judgments based on analogies to stored experiences of previously encountered scenes and physical outcomes. Versions of the latter have recently been instantiated in convolutional neural network (CNN) architectures. Here we report four experiments that, to our knowledge, are the first rigorous comparisons of simulation-based and CNN-based models, where both approaches are concretely instantiated in algorithms that can run on raw image inputs and produce as outputs physical judgments such as whether a stack of blocks will fall. Both approaches can achieve super-human accuracy levels and can quantitatively predict human judgments to a similar degree, but only the simulation-based models generalize to novel situations in ways that people do, and are qualitatively consistent with systematic perceptual illusions and judgment asymmetries that people show.",
"title": ""
},
{
"docid": "8d67dab61a3085c98e5baba614ad0930",
"text": "In this paper, we propose a vehicle type classification method using a semisupervised convolutional neural network from vehicle frontal-view images. In order to capture rich and discriminative information of vehicles, we introduce sparse Laplacian filter learning to obtain the filters of the network with large amounts of unlabeled data. Serving as the output layer of the network, the softmax classifier is trained by multitask learning with small amounts of labeled data. For a given vehicle image, the network can provide the probability of each type to which the vehicle belongs. Unlike traditional methods by using handcrafted visual features, our method is able to automatically learn good features for the classification task. The learned features are discriminative enough to work well in complex scenes. We build the challenging BIT-Vehicle dataset, including 9850 high-resolution vehicle frontal-view images. Experimental results on our own dataset and a public dataset demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "5b6dab3a50df0cbbc086a46caf7f32d2",
"text": "Traveling to unfamiliar regions require a significant effort from novice travelers to plan where to go within a limited duration. In this paper, we propose a smart recommendation for highly efficient and balanced itineraries based on multiple user-generated GPS trajectories. Users only need to provide a minimal query composed of a start point, an end point and travel duration to receive an itinerary recommendation. To differentiate good itinerary candidates from less fulfilling ones, we describe how we model and define itinerary in terms of several characteristics mined from user-generated GPS trajectories. Further, we evaluated the efficiency of our method based on 17,745 user-generated GPS trajectories contributed by 125 users in Beijing, China. Also we performed a user study where current residents of Beijing used our system to review and give ratings to itineraries generated by our algorithm and baseline algorithms for comparison.",
"title": ""
},
{
"docid": "cc23c9f5d2c717a0e4c8f97668029abc",
"text": "We introduce a new representation learning algorithm suited to the context of domain adaptation, in which data at training and test time come from similar but different distributions. Our algorithm is directly inspired by theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on a data representation that cannot discriminate between the training (source) and test (target) domains. We propose a training objective that implements this idea in the context of a neural network, whose hidden layer is trained to be predictive of the classification task, but uninformative as to the domain of the input. Our experiments on a sentiment analysis classification benchmark, where the target domain data available at training time is unlabeled, show that our neural network for domain adaption algorithm has better performance than either a standard neural network or an SVM, even if trained on input features extracted with the state-of-theart marginalized stacked denoising autoencoders of Chen et al. (2012).",
"title": ""
},
{
"docid": "b7fa50099584f8d59b3bfb0cf35674fa",
"text": "A new modified ultra-wideband in-phase power divider for frequency band 2-18 GHz has been designed, successfully fabricated and tested. The power divider is based on the coupled strip-lines. Only two balanced resistors are used in the proposed structure. So the power divider has very low insertion loss. The capacitive strips placed over the resistors have been introduced in the suggested design as the novel elements. Due to the introduced capacitive strips the isolation and impedance matching of the divider outputs were improved at the high frequencies. The manufactured power divider shows very high measured performances (amplitude imbalance is ±0.2 dB, phase imbalance is 5°, insertion loss is 0.4 dB, isolation is -18 dB, VSWR = 1.5.",
"title": ""
},
{
"docid": "9d9086fbdfa46ded883b14152df7f5a5",
"text": "This paper presents a low power continuous time 2nd order Low Pass Butterworth filter operating at power supply of 0.5V suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz using technology node of 0.18μm is achieved. The operational transconductance amplifier is a significant building block in continuous time filter design. To achieve necessary voltage headroom a pseudo-differential architecture is used to design bulk driven transconductor. In contrast, to the gate-driven OTA bulk-driven have the ability to operate over a wide input range. The output common mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150mV (differential) for 1% THD, a dynamic range of 74.62 dB and consumes a total power of 0.225μW when operating at a supply voltage of 0.5V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, lowest among similar low-voltage filters found in the literature.",
"title": ""
},
{
"docid": "27b2148c05febeb1051c1d1229a397d6",
"text": "Modern database management systems essentially solve the problem of accessing and managing large volumes of related data on a single platform, or on a cluster of tightly-coupled platforms. But many problems remain when two or more databases need to work together. A fundamental problem is raised by semantic heterogeneity the fact that data duplicated across multiple databases is represented differently in the underlying database schemas. This tutorial describes fundamental problems raised by semantic heterogeneity and surveys theoretical frameworks that can provide solutions for them. The tutorial considers the following topics: (1) representative architectures for supporting database interoperation; (2) notions for comparing the “information capacity” of database schemas; (3) providing support for read-only integrated views of data, including the .virtual and materialized approaches; (4) providing support for read-write integrated views of data, including the issue of workflows on heterogeneous databases; and (5) research and tools for accessing and effectively using meta-data, e.g., to identify the relationships between schemas of different databases.",
"title": ""
},
{
"docid": "e0217457b00d4c1ba86fc5d9faede342",
"text": "This paper reviews the first challenge on efficient perceptual image enhancement with the focus on deploying deep learning models on smartphones. The challenge consisted of two tracks. In the first one, participants were solving the classical image super-resolution problem with a bicubic downscaling factor of 4. The second track was aimed at real-world photo enhancement, and the goal was to map low-quality photos from the iPhone 3GS device to the same photos captured with a DSLR camera. The target metric used in this challenge combined the runtime, PSNR scores and solutions’ perceptual results measured in the user study. To ensure the efficiency of the submitted models, we additionally measured their runtime and memory requirements on Android smartphones. The proposed solutions significantly improved baseline results defining the state-of-the-art for image enhancement on smartphones.",
"title": ""
},
{
"docid": "780095276d7ac3cae1b95b7a1ceee8b3",
"text": "This work presents a systematic study toward the design and first demonstration of high-performance n-type monolayer tungsten diselenide (WSe2) field effect transistors (FET) by selecting the contact metal based on understanding the physics of contact between metal and monolayer WSe2. Device measurements supported by ab initio density functional theory (DFT) calculations indicate that the d-orbitals of the contact metal play a key role in forming low resistance ohmic contacts with monolayer WSe2. On the basis of this understanding, indium (In) leads to small ohmic contact resistance with WSe2 and consequently, back-gated In-WSe2 FETs attained a record ON-current of 210 μA/μm, which is the highest value achieved in any monolayer transition-metal dichalcogenide- (TMD) based FET to date. An electron mobility of 142 cm(2)/V·s (with an ON/OFF current ratio exceeding 10(6)) is also achieved with In-WSe2 FETs at room temperature. This is the highest electron mobility reported for any back gated monolayer TMD material till date. The performance of n-type monolayer WSe2 FET was further improved by Al2O3 deposition on top of WSe2 to suppress the Coulomb scattering. Under the high-κ dielectric environment, electron mobility of Ag-WSe2 FET reached ~202 cm(2)/V·s with an ON/OFF ratio of over 10(6) and a high ON-current of 205 μA/μm. In tandem with a recent report of p-type monolayer WSe2 FET ( Fang , H . et al. Nano Lett. 2012 , 12 , ( 7 ), 3788 - 3792 ), this demonstration of a high-performance n-type monolayer WSe2 FET corroborates the superb potential of WSe2 for complementary digital logic applications.",
"title": ""
},
{
"docid": "fe116849575dd91759a6c1ef7ed239f3",
"text": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
"title": ""
},
{
"docid": "bdc82fead985055041171d63415f9dde",
"text": "We introduce a new corpus of sentence-level agreement and disagreement annotations over LiveJournal and Wikipedia threads. This is the first agreement corpus to offer full-document annotations for threaded discussions. We provide a methodology for coding responses as well as an implemented tool with an interface that facilitates annotation of a specific response while viewing the full context of the thread. Both the results of an annotator questionnaire and high inter-annotator agreement statistics indicate that the annotations collected are of high quality.",
"title": ""
}
] |
scidocsrr
|
9f8ed2cc136f2da0feeadc0fb2389145
|
5198-17 Agile for Everyone Else : Using Triggers and Checks to Create Agility Outside of Software Development
|
[
{
"docid": "c404e6ecb21196fec9dfeadfcb5d4e4b",
"text": "The goal of leading indicators for safety is to identify the potential for an accident before it occurs. Past efforts have focused on identifying general leading indicators, such as maintenance backlog, that apply widely in an industry or even across industries. Other recommendations produce more system-specific leading indicators, but start from system hazard analysis and thus are limited by the causes considered by the traditional hazard analysis techniques. Most rely on quantitative metrics, often based on probabilistic risk assessments. This paper describes a new and different approach to identifying system-specific leading indicators and provides guidance in designing a risk management structure to generate, monitor and use the results. The approach is based on the STAMP (SystemTheoretic Accident Model and Processes) model of accident causation and tools that have been designed to build on that model. STAMP extends current accident causality to include more complex causes than simply component failures and chains of failure events or deviations from operational expectations. It incorporates basic principles of systems thinking and is based on systems theory rather than traditional reliability theory.",
"title": ""
}
] |
[
{
"docid": "e4a3a52e297d268288aba404f0d24544",
"text": "The world is facing several challenges that must be dealt within the coming years such as efficient energy management, need for economic growth, security and quality of life of its habitants. The increasing concentration of the world population into urban areas puts the cities in the center of the preoccupations and makes them important actors for the world's sustainable development strategy. ICT has a substantial potential to help cities to respond to the growing demands of more efficient, sustainable, and increased quality of life in the cities, thus to make them \"smarter\". Smartness is directly proportional with the \"awareness\". Cyber-physical systems can extract the awareness information from the physical world and process this information in the cyber-world. Thus, a holistic integrated approach, from the physical to the cyber-world is necessary for a successful and sustainable smart city outcome. This paper introduces important research challenges that we believe will be important in the coming years and provides guidelines and recommendations to achieve self-aware smart city objectives.",
"title": ""
},
{
"docid": "2ba53ad9e9c015779cfb2aec51fe310f",
"text": "In the past few years, more and more researchers have paid close attention to the emerging field of delay tolerant networks (DTNs), in which network often partitions and end-to-end paths do not exist nearly all the time. To cope with these challenges, most routing protocols employ the \"store-carry-forward\" strategy to transmit messages. However, the difficulty of this strategy is how to choose the best relay node and determine the best time to forward messages. Fortunately, social relations among nodes can be used to address these problems. In this paper, we present a comprehensive survey of recent social-aware routing protocols, which offer an insight into how to utilize social relationships to design efficient and applicable routing algorithms in DTNs. First, we review the major practical applications of DTNs. Then, we focus on understanding social ties between nodes and investigating some design-related issues of social-based routing approaches, e.g., the ways to obtain social relations among nodes, the metrics and approaches to identify the characteristics of social ties, the strategies to optimize social-aware routing protocols, and the suitable mobility traces to evaluate these protocols. We also create a taxonomy for social-aware routing protocols according to the sources of social relations. Finally, we outline several open issues and research challenges.",
"title": ""
},
{
"docid": "36c3bd9e1203b9495d92a40c5fa5f2c0",
"text": "A 14-year-old boy presented with asymptomatic right hydronephrosis detected on routine yearly ultrasound examination. Previously, he had at least two normal renal ultrasonograms, 4 years after remission of acute myeloblastic leukemia, treated by AML-BFM-93 protocol. A function of the right kidney and no damage on the left was confirmed by a DMSA scan. Right retroperitoneoscopic nephrectomy revealed 3 renal arteries with the lower pole artery lying on the pelviureteric junction. Histologically chronic tubulointerstitial nephritis was detected. In the pathogenesis of this severe unilateral renal damage, we suspect the exacerbation of deleterious effects of cytostatic therapy on kidneys with intermittent hydronephrosis.",
"title": ""
},
{
"docid": "d60b1a9a23fe37813a24533104a74d70",
"text": "Online display advertising is a multi-billion dollar industry where advertisers promote their products to users by having publishers display their advertisements on popular Web pages. An important problem in online advertising is how to forecast the number of user visits for a Web page during a particular period of time. Prior research addressed the problem by using traditional time-series forecasting techniques on historical data of user visits; (e.g., via a single regression model built for forecasting based on historical data for all Web pages) and did not fully explore the fact that different types of Web pages and different time stamps have different patterns of user visits. In this paper, we propose a series of probabilistic latent class models to automatically learn the underlying user visit patterns among multiple Web pages and multiple time stamps. The last (and the most effective) proposed model identifies latent groups/classes of (i) Web pages and (ii) time stamps with similar user visit patterns, and learns a specialized forecast model for each latent Web page and time stamp class. Compared with a single regression model as well as several other baselines, the proposed latent class model approach has the capability of differentiating the importance of different types of information across different classes of Web pages and time stamps, and therefore has much better modeling flexibility. An extensive set of experiments along with detailed analysis carried out on real-world data from Yahoo! demonstrates the advantage of the proposed latent class models in forecasting online user visits in online display advertising.",
"title": ""
},
{
"docid": "1efd6da40ac525921b63257d9a3990be",
"text": "Movie plot summaries are expected to reflect the genre of movies since many spectators read the plot summaries before deciding to watch a movie. In this study, we perform movie genre classification from plot summaries of movies using bidirectional LSTM (Bi-LSTM). We first divide each plot summary of a movie into sentences and assign the genre of corresponding movie to each sentence. Next, using the word representations of sentences, we train Bi-LSTM networks. We estimate the genres for each sentence separately. Since plot summaries generally contain multiple sentences, we use majority voting for the final decision by considering the posterior probabilities of genres assigned to sentences. Our results reflect that, training Bi-LSTM network after dividing the plot summaries into their sentences and fusing the predictions for individual sentences outperform training the network with the whole plot summaries with the limited amount of data. Moreover, employing Bi-LSTM performs better compared to basic Recurrent Neural Networks (RNNs) and Logistic Regression (LR) as a baseline.",
"title": ""
},
{
"docid": "0dde4746ba5e3c33fbe88b93f6d01f8d",
"text": "In this paper, we study the application of Extreme Learning Machine (ELM) algorithm for single layered feedforward neural networks to non-linear chaotic time series problems. In this algorithm the input weights and the hidden layer bias are randomly chosen. The ELM formulation leads to solving a system of linear equations in terms of the unknown weights connecting the hidden layer to the output layer. The solution of this general system of linear equations will be obtained using Moore-Penrose generalized pseudo inverse. For the study of the application of the method we consider the time series generated by the Mackey Glass delay differential equation with different time delays, Santa Fe A and UCR heart beat rate ECG time series. For the choice of sigmoid, sin and hardlim activation functions the optimal values for the memory order and the number of hidden neurons which give the best prediction performance in terms of root mean square error are determined. It is observed that the results obtained are in close agreement with the exact solution of the problems considered which clearly shows that ELM is a very promising alternative method for time series prediction. Keywords—Chaotic time series, Extreme learning machine, Generalization performance.",
"title": ""
},
{
"docid": "36f73143b6f4d80e8f1d77505fabbfcf",
"text": "Progress of IoT and ubiquitous computing technologies has strong anticipation to realize smart services in households such as efficient energy-saving appliance control and elderly monitoring. In order to put those applications into practice, high-accuracy and low-cost in-home living activity recognition is essential. Many researches have tackled living activity recognition so far, but the following problems remain: (i)privacy exposure due to utilization of cameras and microphones; (ii) high deployment and maintenance costs due to many sensors used; (iii) burden to force the user to carry the device and (iv) wire installation to supply power and communication between sensor node and server; (v) few recognizable activities; (vi) low recognition accuracy. In this paper, we propose an in-home living activity recognition method to solve all the problems. To solve the problems (i)--(iv), our method utilizes only energy harvesting PIR and door sensors with a home server for data collection and processing. The energy harvesting sensor has a solar cell to drive the sensor and wireless communication modules. To solve the problems (v) and (vi), we have tackled the following challenges: (a) determining appropriate features for training samples; and (b) determining the best machine learning algorithm to achieve high recognition accuracy; (c) complementing the dead zone of PIR sensor semipermanently. We have conducted experiments with the sensor by five subjects living in a home for 2-3 days each. As a result, the proposed method has achieved F-measure: 62.8% on average.",
"title": ""
},
{
"docid": "36b97ad6508f40acfaba05318d65211a",
"text": "Actinomycotic infections are known to have an association with difficulties in diagnosis and treatment. These infections usually involve the head, neck, thorax, and abdomen. Actinomycosis of the upper lip is a rare condition and an important one as well, because it can imitate other diseases. As the initial impression, it can easily be mistaken for a mucocele, venous lake, or benign neoplasm. An 82-year-old man presented with an asymptomatic normal skin colored nodule on the upper lip. Histopathologic findings showed an abscess and sulfur granules in the dermis. Gram staining results showed a mesh of branching rods. In this report, we present an unusual case of actinomycosis of the upper lip and discuss its characteristics and therapeutic modalities.",
"title": ""
},
{
"docid": "38218e2f723dd33509f5acdd401e2f53",
"text": "Tissue-resident macrophages are highly heterogeneous in terms of their functions and phenotypes as a consequence of adaptation to different tissue environments. Local tissue-derived signals are thought to control functional polarization of resident macrophages; however, the identity of these signals remains largely unknown. It is also unknown whether functional heterogeneity is a result of irreversible lineage-specific differentiation or a consequence of continuous but reversible induction of diverse functional programs. Here, we identified retinoic acid as a signal that induces tissue-specific localization and functional polarization of peritoneal macrophages through the reversible induction of transcription factor GATA6. We further found that GATA6 in macrophages regulates gut IgA production through peritoneal B-1 cells. These results provide insight into the regulation of tissue-resident macrophage functional specialization by tissue-derived signals.",
"title": ""
},
{
"docid": "201d9105d956bc8cb8d692490d185487",
"text": "BACKGROUND\nDespite its evident clinical benefits, single-incision laparoscopic surgery (SILS) imposes inherent limitations of collision between external arms and inadequate triangulation because multiple instruments are inserted through a single port at the same time.\n\n\nMETHODS\nA robot platform appropriate for SILS was developed wherein an elbowed instrument can be equipped to easily create surgical triangulation without the interference of robot arms. A novel joint mechanism for a surgical instrument actuated by a rigid link was designed for high torque transmission capability.\n\n\nRESULTS\nThe feasibility and effectiveness of the robot was checked through three kinds of preliminary tests: payload, block transfer, and ex vivo test. Measurements showed that the proposed robot has a payload capability >15 N with 7 mm diameter.\n\n\nCONCLUSIONS\nThe proposed robot is effective and appropriate for SILS, overcoming inadequate triangulation and improving workspace and traction force capability.",
"title": ""
},
{
"docid": "7288fa9dc9cea8b3dc0abea0984de6f6",
"text": "In recent years, deep neural networks have been shown to be effective in many classification tasks, including music genre classification. In this paper, we proposed two ways to improve music genre classification with convolutional neural networks: 1) combining maxand averagepooling to provide more statistical information to higher level neural networks; 2) using shortcut connections to skip one or more layers, a method inspired by residual learning method. The input of the CNN is simply the short time Fourier transforms of the audio signal. The output of the CNN is fed into another deep neural network to do classification. By comparing two different network topologies, our preliminary experimental results on the GTZAN data set show that the above two methods can effectively improve the classification accuracy, especially the second one.",
"title": ""
},
{
"docid": "3165b876e7e1bcdccc261593235078f8",
"text": "The next challenge of game AI lies in Real Time Strategy (RTS) games. RTS games provide partially observable gaming environments, where agents interact with one another in an action space much larger than that of GO. Mastering RTS games requires both strong macro strategies and delicate micro level execution. Recently, great progress has been made in micro level execution, while complete solutions for macro strategies are still lacking. In this paper, we propose a novel learning-based Hierarchical Macro Strategy model for mastering MOBA games, a sub-genre of RTS games. Trained by the Hierarchical Macro Strategy model, agents explicitly make macro strategy decisions and further guide their micro level execution. Moreover, each of the agents makes independent strategy decisions, while simultaneously communicating with the allies through leveraging a novel imitated crossagent communication mechanism. We perform comprehensive evaluations on a popular 5v5 Multiplayer Online Battle Arena (MOBA) game. Our 5-AI team achieves a 48% winning rate against human player teams which are ranked top 1% in the player ranking system.",
"title": ""
},
{
"docid": "bf83b9fef9b4558538b2207ba57b4779",
"text": "This paper presents preliminary results for the design, development and evaluation of a hand rehabilitation glove fabricated using soft robotic technology. Soft actuators comprised of elastomeric materials with integrated channels that function as pneumatic networks (PneuNets), are designed and geometrically analyzed to produce bending motions that can safely conform with the human finger motion. Bending curvature and force response of these actuators are investigated using geometrical analysis and a finite element model (FEM) prior to fabrication. The fabrication procedure of the chosen actuator is described followed by a series of experiments that mechanically characterize the actuators. The experimental data is compared to results obtained from FEM simulations showing good agreement. Finally, an open-palm glove design and the integration of the actuators to it are described, followed by a qualitative evaluation study.",
"title": ""
},
{
"docid": "ed9beb7f6ffc65439f34294dec11a966",
"text": "CONTEXT\nA variety of ankle self-stretching exercises have been recommended to improve ankle-dorsiflexion range of motion (DFROM) in individuals with limited ankle dorsiflexion. A strap can be applied to stabilize the talus and facilitate anterior glide of the distal tibia at the talocrural joint during ankle self-stretching exercises. Novel ankle self-stretching using a strap (SSS) may be a useful method of improving ankle DFROM.\n\n\nOBJECTIVE\nTo compare the effects of 2 ankle-stretching techniques (static stretching versus SSS) on ankle DFROM.\n\n\nDESIGN\nRandomized controlled clinical trial.\n\n\nSETTING\nUniversity research laboratory.\n\n\nPATIENTS OR OTHER PARTICIPANTS\nThirty-two participants with limited active dorsiflexion (<20°) while sitting (14 women and 18 men) were recruited.\n\n\nMAIN OUTCOME MEASURE(S)\nThe participants performed 2 ankle self-stretching techniques (static stretching and SSS) for 3 weeks. Active DFROM (ADFROM), passive DFROM (PDFROM), and the lunge angle were measured. An independent t test was used to compare the improvements in these values before and after the 2 stretching interventions. The level of statistical significance was set at α = .05.\n\n\nRESULTS\nActive DFROM and PDFROM were greater in both stretching groups after the 3-week interventions. However, ADFROM, PDFROM, and the lunge angle were greater in the SSS group than in the static-stretching group (P < .05).\n\n\nCONCLUSIONS\nAnkle SSS is recommended to improve ADFROM, PDFROM, and the lunge angle in individuals with limited DFROM.",
"title": ""
},
{
"docid": "5339bd241f053214673ead767476077d",
"text": "----------------------------------------------------------------------ABSTRACT----------------------------------------------------------This paper is a general survey of all the security issues existing in the Internet of Things (IoT) along with an analysis of the privacy issues that an end-user may face as a consequence of the spread of IoT. The majority of the survey is focused on the security loopholes arising out of the information exchange technologies used in Internet of Things. No countermeasure to the security drawbacks has been analyzed in the paper.",
"title": ""
},
{
"docid": "beff5ce5202460e736af0f06d5d75f83",
"text": "MOTIVATION\nDuring the past decade, the new focus on genomics has highlighted a particular challenge: to integrate the different views of the genome that are provided by various types of experimental data.\n\n\nRESULTS\nThis paper describes a computational framework for integrating and drawing inferences from a collection of genome-wide measurements. Each dataset is represented via a kernel function, which defines generalized similarity relationships between pairs of entities, such as genes or proteins. The kernel representation is both flexible and efficient, and can be applied to many different types of data. Furthermore, kernel functions derived from different types of data can be combined in a straightforward fashion. Recent advances in the theory of kernel methods have provided efficient algorithms to perform such combinations in a way that minimizes a statistical loss function. These methods exploit semidefinite programming techniques to reduce the problem of finding optimizing kernel combinations to a convex optimization problem. Computational experiments performed using yeast genome-wide datasets, including amino acid sequences, hydropathy profiles, gene expression data and known protein-protein interactions, demonstrate the utility of this approach. A statistical learning algorithm trained from all of these data to recognize particular classes of proteins--membrane proteins and ribosomal proteins--performs significantly better than the same algorithm trained on any single type of data.\n\n\nAVAILABILITY\nSupplementary data at http://noble.gs.washington.edu/proj/sdp-svm",
"title": ""
},
{
"docid": "fec4e9bb14c0071e64a5d6c25281a8d1",
"text": "Currently, we are witnessing a growing trend in the study and application of problems in the framework of Big Data. This is mainly due to the great advantages which come from the knowledge extraction from a high volume of information. For this reason, we observe a migration of the standard Data Mining systems towards a new functional paradigm that allows at working with Big Data. By means of the MapReduce model and its different extensions, scalability can be successfully addressed, while maintaining a good fault tolerance during the execution of the algorithms. Among the different approaches used in Data Mining, those models based on fuzzy systems stand out for many applications. Among their advantages, we must stress the use of a representation close to the natural language. Additionally, they use an inference model that allows a good adaptation to different scenarios, especially those with a given degree of uncertainty. Despite the success of this type of systems, their migration to the Big Data environment in the different learning areas is at a preliminary stage yet. In this paper, we will carry out an overview of the main existing proposals on the topic, analyzing the design of these models. Additionally, we will discuss those problems related to the data distribution and parallelization of the current algorithms, and also its relationship with the fuzzy representation of the information. Finally, we will provide our view on the expectations for the future in this framework according to the design of those methods based on fuzzy sets, as well as the open challenges on the topic.",
"title": ""
},
{
"docid": "c66b9dbc0321fe323a519aff49da6bb5",
"text": "Stratum, the de-facto mining communication protocol used by blockchain based cryptocurrency systems, enables miners to reliably and efficiently fetch jobs from mining pool servers. In this paper we exploit Stratum’s lack of encryption to develop passive and active attacks on Bitcoin’s mining protocol, with important implications on the privacy, security and even safety of mining equipment owners. We introduce StraTap and ISP Log attacks, that infer miner earnings if given access to miner communications, or even their logs. We develop BiteCoin, an active attack that hijacks shares submitted by miners, and their associated payouts. We build BiteCoin on WireGhost, a tool we developed to hijack and surreptitiously maintain Stratum connections. Our attacks reveal that securing Stratum through pervasive encryption is not only undesirable (due to large overheads), but also ineffective: an adversary can predict miner earnings even when given access to only packet timestamps. Instead, we devise Bedrock, a minimalistic Stratum extension that protects the privacy and security of mining participants. We introduce and leverage the mining cookie concept, a secret that each miner shares with the pool and includes in its puzzle computations, and that prevents attackers from reconstructing or hijacking the puzzles. We have implemented our attacks and collected 138MB of Stratum protocol traffic from mining equipment in the US and Venezuela. We show that Bedrock is resilient to active attacks even when an adversary breaks the crypto constructs it uses. Bedrock imposes a daily overhead of 12.03s on a single pool server that handles mining traffic from 16,000 miners.",
"title": ""
},
{
"docid": "1b8394f45b88f2474f72c500fc0a6fe4",
"text": "User-Generated live video streaming systems are services that allow anybody to broadcast a video stream over the Internet. These Over-The-Top services have recently gained popularity, in particular with e-sport, and can now be seen as competitors of the traditional cable TV. In this paper, we present a dataset for further works on these systems. This dataset contains data on the two main user-generated live streaming systems: Twitch and the live service of YouTube. We got three months of traces of these services from January to April 2014. Our dataset includes, at every five minutes, the identifier of the online broadcaster, the number of people watching the stream, and various other media information. In this paper, we introduce the dataset and we make a preliminary study to show the size of the dataset and its potentials. We first show that both systems generate a significant traffic with frequent peaks at more than 1 Tbps. Thanks to more than a million unique uploaders, Twitch is in particular able to offer a rich service at anytime. Our second main observation is that the popularity of these channels is more heterogeneous than what have been observed in other services gathering user-generated content.",
"title": ""
}
] |
scidocsrr
|
1271f92970087d9cceae152fa2041f5a
|
Modelling the Scene Dependent Imaging in Cameras with a Deep Neural Network
|
[
{
"docid": "135d451e66cdc8d47add47379c1c35f9",
"text": "We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind denoising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.",
"title": ""
},
{
"docid": "b480111b47176fe52cd6f9ca296dc666",
"text": "We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning. Fig. 1: Our automatic colorization of grayscale input; more examples in Figs. 3 and 4.",
"title": ""
},
{
"docid": "bf4ca26749ab17967210d9381222bb33",
"text": "To produce images that are suitable for display, tone-mapping is widely used in digital cameras to map linear color measurements into narrow gamuts with limited dynamic range. This introduces non-linear distortion that must be undone, through a radiometric calibration process, before computer vision systems can analyze such photographs radiometrically. This paper considers the inherent uncertainty of undoing the effects of tone-mapping. We observe that this uncertainty varies substantially across color space, making some pixels more reliable than others. We introduce a model for this uncertainty and a method for fitting it to a given camera or imaging pipeline. Once fit, the model provides for each pixel in a tone-mapped digital photograph a probability distribution over linear scene colors that could have induced it. We demonstrate how these distributions can be useful for visual inference by incorporating them into estimation algorithms for a representative set of vision tasks.",
"title": ""
}
] |
[
{
"docid": "4d14e2a47d68b6113466b1e096c924ee",
"text": "In this paper, we experimentally realized a steering antenna using a type of active metamaterial with tunable refractive index. The metamaterial is realized by periodically printed subwavelength metallic resonant patterns with embedded microwave varactors. The effective refractive index can be controlled by low direct-current (dc) bias voltage applied to the varactors. In-phase electromagnetic waves transmitting in different zones of such metamaterial slab experience different phase delays, and, consequently, the output direction of the transmitted wave can be steered with progressive phase shift along the interface. This antenna has a simple structure, is very easy to configure the beam direction, and has a low cost. Compared with conventional phased-array antennas, the radome approach has more flexibility to operate with different feeding antennas for various applications.",
"title": ""
},
{
"docid": "a93c8fbeee229a9a6d65927658c2fa31",
"text": "We present a simple, efficient method of realtime articulated arm pose estimation using stochastic gradient descent to correct unmodeled errors in the robot’s kinematics with point cloud data from commercial depth sensors. We show that our method is robust to error in both the robot’s joint encoders and in the extrinsic calibration of the sensor; and that it is both fast and accurate enough to provide realtime performance for autonomous manipulation tasks. The efficiency of our technique allows us to embed it in a closedloop position servoing strategy; which we extensively use to perform manipulation tasks. Our method is generalizable to any articulated robot, including dexterous humanoids and mobile manipulators with multiple kinematic chains.",
"title": ""
},
{
"docid": "c73b65bced395eae228869186e254105",
"text": "Energy consumption has become a major constraint on the capabilities of computer systems. In large systems the energy consumed by Dynamic Random Access Memories (DRAM) is a significant part of the total energy consumption. It is possible to calculate the energy consumption of currently available DRAMs from their datasheets, but datasheets don’t allow extrapolation to future DRAM technologies and don’t show how other changes like increasing bandwidth requirements change DRAM energy consumption. This paper first presents a flexible DRAM power model which uses a description of DRAM architecture, technology and operation to calculate power usage and verifies it against datasheet values. Then the model is used together with assumptions about the DRAM roadmap to extrapolate DRAM energy consumption to future DRAM generations. Using this model we evaluate some of the proposed DRAM power reduction schemes.",
"title": ""
},
{
"docid": "75a3013316a013ac472c6dffefd516ee",
"text": "We propose a technique for joint calibration of a wide-angle rolling shutter camera (e.g. a GoPro) and an externally mounted gyroscope. The calibrated parameters are time scaling and offset, relative pose between gyroscope and camera, and gyroscope bias. The parameters are found using non-linear least squares minimisation using the symmetric transfer error as cost function. The primary contribution is methods for robust initialisation of the relative pose and time offset, which are essential for convergence. We also introduce a robust error norm to handle outliers. This results in a technique that works with general video content and does not require any specific setup or calibration patterns. We apply our method to stabilisation of videos recorded by a rolling shutter camera, with a rigidly attached gyroscope. After recording, the gyroscope and camera are jointly calibrated using the recorded video itself. The recorded video can then be stabilised using the calibrated parameters. We evaluate the technique on video sequences with varying difficulty and motion frequency content. The experiments demonstrate that our method can be used to produce high quality stabilised videos even under difficult conditions, and that the proposed initialisation is shown to end up within the basin of attraction. We also show that a residual based on the symmetric transfer error is more accurate than residuals based on the recently proposed epipolar plane normal coplanarity constraint, and that the use of robust errors is a critical component to obtain an accurate calibration.",
"title": ""
},
{
"docid": "be8b65d39ee74dbee0835052092040da",
"text": "We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact. Adopting a straightforward decomposition of the problem into entity detection, entity linking, relation prediction, and evidence combination, we explore simple yet strong baselines. On the popular SIMPLEQUESTIONS dataset, we find that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art, and techniques that do not use neural networks also perform reasonably well. These results show that gains from sophisticated deep learning techniques proposed in the literature are quite modest and that some previous models exhibit unnecessary complexity.",
"title": ""
},
{
"docid": "77aeedfecd1529579c66a4e866a8ad27",
"text": "The massive amount of data in social media platforms is a key source for companies to analyze customer sentiment and opinions. Many existing sentiment analysis approaches solely rely on textual contents of a sentence (e.g. words) for sentiment identification. Consequently, current sentiment analysis systems are ineffective for analyzing contents in social media because people may use non-standard language (e.g., abbreviations, misspellings, emoticons or multiple languages) in online platforms. Inspired by the attribution theory that is grounded in social psychology, we propose a sentiment analysis framework that considers the social relationships among users and contents. We conduct experiments to compare the proposed approach against the existing approaches on a dataset collected from Facebook. The results indicate that we can more accurately classify sentiment of sentences by utilizing social relationships. The results have important implications for companies to analyze customer opinions.",
"title": ""
},
{
"docid": "4b987a98974c80f1d69bf8ecdb8e4327",
"text": "This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels (i.e. some of their labels are missing). To handle missing labels, we propose a unified model of label dependencies by constructing a mixed graph, which jointly incorporates (i) instance-level similarity and class co-occurrence as undirected edges and (ii) semantic label hierarchy as directed edges. Unlike most MLML methods, We formulate this learning problem transductively as a convex quadratic matrix optimization problem that encourages training label consistency and encodes both types of label dependencies (i.e. undirected and directed edges) using quadratic terms and hard linear constraints. The alternating direction method of multipliers (ADMM) can be used to exactly and efficiently solve this problem. To evaluate our proposed method, we consider two popular applications (image and video annotation), where the label hierarchy can be derived from Wordnet. Experimental results show that our method achieves a significant improvement over state-of-the-art methods in performance and robustness to missing labels.",
"title": ""
},
{
"docid": "5fc192fc2f5be64a69eea7c4e848dd95",
"text": "Hypertrophic scars and keloids are fibroproliferative disorders that may arise after any deep cutaneous injury caused by trauma, burns, surgery, etc. Hypertrophic scars and keloids are cosmetically problematic, and in combination with functional problems such as contractures and subjective symptoms including pruritus, these significantly affect patients' quality of life. There have been many studies on hypertrophic scars and keloids; but the mechanisms underlying scar formation have not yet been well established, and prophylactic and treatment strategies remain unsatisfactory. In this review, the authors introduce and summarize classical concepts surrounding wound healing and review recent understandings of the biology, prevention and treatment strategies for hypertrophic scars and keloids.",
"title": ""
},
{
"docid": "0ec7a27ed4d89909887b08c5ea823756",
"text": "Brain responses to pain, assessed through positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) are reviewed. Functional activation of brain regions are thought to be reflected by increases in the regional cerebral blood flow (rCBF) in PET studies, and in the blood oxygen level dependent (BOLD) signal in fMRI. rCBF increases to noxious stimuli are almost constantly observed in second somatic (SII) and insular regions, and in the anterior cingulate cortex (ACC), and with slightly less consistency in the contralateral thalamus and the primary somatic area (SI). Activation of the lateral thalamus, SI, SII and insula are thought to be related to the sensory-discriminative aspects of pain processing. SI is activated in roughly half of the studies, and the probability of obtaining SI activation appears related to the total amount of body surface stimulated (spatial summation) and probably also by temporal summation and attention to the stimulus. In a number of studies, the thalamic response was bilateral, probably reflecting generalised arousal in reaction to pain. ACC does not seem to be involved in coding stimulus intensity or location but appears to participate in both the affective and attentional concomitants of pain sensation, as well as in response selection. ACC subdivisions activated by painful stimuli partially overlap those activated in orienting and target detection tasks, but are distinct from those activated in tests involving sustained attention (Stroop, etc.). In addition to ACC, increased blood flow in the posterior parietal and prefrontal cortices is thought to reflect attentional and memory networks activated by noxious stimulation. Less noted but frequent activation concerns motor-related areas such as the striatum, cerebellum and supplementary motor area, as well as regions involved in pain control such as the periaqueductal grey. In patients, chronic spontaneous pain is associated with decreased resting rCBF in contralateral thalamus, which may be reverted by analgesic procedures. Abnormal pain evoked by innocuous stimuli (allodynia) has been associated with amplification of the thalamic, insular and SII responses, concomitant to a paradoxical CBF decrease in ACC. It is argued that imaging studies of allodynia should be encouraged in order to understand central reorganisations leading to abnormal cortical pain processing. A number of brain areas activated by acute pain, particularly the thalamus and anterior cingulate, also show increases in rCBF during analgesic procedures. Taken together, these data suggest that hemodynamic responses to pain reflect simultaneously the sensory, cognitive and affective dimensions of pain, and that the same structure may both respond to pain and participate in pain control. The precise biochemical nature of these mechanisms remains to be investigated.",
"title": ""
},
{
"docid": "f58a1a0d8cc0e2c826c911be4451e0df",
"text": "From an accessibility perspective, voice-controlled, home-based intelligent personal assistants (IPAs) have the potential to greatly expand speech interaction beyond dictation and screen reader output. To examine the accessibility of off-the-shelf IPAs (e.g., Amazon Echo) and to understand how users with disabilities are making use of these devices, we conducted two exploratory studies. The first, broader study is a content analysis of 346 Amazon Echo reviews that include users with disabilities, while the second study more specifically focuses on users with visual impairments, through interviews with 16 current users of home-based IPAs. Findings show that, although some accessibility challenges exist, users with a range of disabilities are using the Amazon Echo, including for unexpected cases such as speech therapy and support for caregivers. Richer voice-based applications and solutions to support discoverability would be particularly useful to users with visual impairments. These findings should inform future work on accessible voice-based IPAs.",
"title": ""
},
{
"docid": "0e3eaf955aa6d0199b4cee08198b6ae0",
"text": "Relevance Feedback has proven very effective for improving retrieval accuracy. A difficult yet important problem in all relevance feedback methods is how to optimally balance the original query and feedback information. In the current feedback methods, the balance parameter is usually set to a fixed value across all the queries and collections. However, due to the difference in queries and feedback documents, this balance parameter should be optimized for each query and each set of feedback documents.\n In this paper, we present a learning approach to adaptively predict the optimal balance coefficient for each query and each collection. We propose three heuristics to characterize the balance between query and feedback information. Taking these three heuristics as a road map, we explore a number of features and combine them using a regression approach to predict the balance coefficient. Our experiments show that the proposed adaptive relevance feedback is more robust and effective than the regular fixed-coefficient feedback.",
"title": ""
},
{
"docid": "0b231777fedf27659b4558aaabb872be",
"text": "Recognizing multiple mixed group activities from one still image is not a hard problem for humans but remains highly challenging for computer recognition systems. When modelling interactions among multiple units (i.e., more than two groups or persons), the existing approaches tend to divide them into interactions between pairwise units. However, no mathematical evidence supports this transformation. Therefore, these approaches’ performance is limited on images containing multiple activities. In this paper, we propose a generative model to provide a more reasonable interpretation for the mixed group activities contained in one image. We design a four level structure and convert the original intra-level interactions into inter-level interactions, in order to implement both interactions among multiple groups and interactions among multiple persons within a group. The proposed four-level structure makes our model more robust against the occlusion and overlap of the visible poses in images. Experimental results demonstrate that our model makes good interpretations for mixed group activities and outperforms the state-of-the-art methods on the Collective Activity Classification dataset.",
"title": ""
},
{
"docid": "8df304fdd0099a836a25414b0bbfb62f",
"text": "Ahtract -Diagrams are widely used in several areas of computer wience, and their effectiveness is thoroughly recognized. One of the main qualities requested for them is readability; this is especially, but not exclusively, true in the area of information systems, where diagrams are used to model data and functions of the application. Up to now, diagrams have been produced manually or with the aid of a graphic editor; in both caws placement of symbols and routing of connections are under responsibility of the designer. The goal of the work is to investigate how readability of diagrams can be achieved by means of automatic tools. Existing results in the literature are compared, and a comprehensive algorithmic approach to the problem is proposed. The algorithm presented draws graphs on a grid and is suitable for both undirected graphs and mixed graphs that contain as subgraphs hierarchic structures. Finally, several applications of a graphic tool that embodies the aforementioned facility are shown.",
"title": ""
},
{
"docid": "23b90259d48fe9792ee232aad4ca56be",
"text": "a r t i c l e i n f o Plate tectonics is a self-organizing global system driven by the negative buoyancy of the thermal boundary layer resulting in subduction. Although the signature of plate tectonics is recognized with some confidence in the Phanerozoic geological record of the continents, evidence for plate tectonics becomes less certain further back in time. To improve our understanding of plate tectonics on the Earth in the Precambrian we have to combine knowledge derived from the geological record with results from well-constrained numerical modeling. In a series of experiments using a 2D petrological–thermomechanical numerical model of oceanic subduction we have systematically investigated the dependence of tectono-metamorphic and magmatic regimes at an active plate margin on upper-mantle temperature, crustal radiogenic heat production, degree of lithospheric weakening and other parameters. We have identified a first-order transition from a \" no-subduction \" tectonic regime through a \" pre-subduction \" tectonic regime to the modern style of subduction. The first transition is gradual and occurs at upper-mantle temperatures between 250 and 200 K above the present-day values, whereas the second transition is more abrupt and occurs at 175–160 K. The link between geological observations and model results suggests that the transition to the modern plate tectonic regime might have occurred during the Mesoarchean–Neoarchean time (ca. 3.2–2.5 Ga). In the case of the \" pre-subduction \" tectonic regime (upper-mantle temperature 175–250 K above the present) the plates are weakened by intense percolation of melts derived from the underlying hot melt-bearing sub-lithospheric mantle. In such cases, convergence does not produce self-sustaining one-sided subduction, but rather results in shallow underthrusting of the oceanic plate under the continental plate. Further increase in the upper-mantle temperature (N 250 K above the present) causes a transition to a \" no-subduction \" regime where horizontal movements of small deformable plate fragments are accommodated by internal strain and even shallow underthrusts do not form under the imposed convergence. Thus, based on the results of the numerical modeling, we suggest that the crucial parameter controlling the tectonic regime is the degree of lithospheric weakening induced by emplacement of sub-lithospheric melts into the lithosphere. A lower melt flux at upper-mantle temperatures b 175–160 K results in a lesser degree of melt-related weakening leading to stronger plates, which stabilizes modern style subduction even at high mantle temperatures.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "b5e353041fa966132928698c4ad8ceb9",
"text": "We introduce the concept of dynamic image, a novel compact representation of videos useful for video analysis, particularly in combination with convolutional neural networks (CNNs). A dynamic image encodes temporal data such as RGB or optical flow videos by using the concept of ‘rank pooling’. The idea is to learn a ranking machine that captures the temporal evolution of the data and to use the parameters of the latter as a representation. We call the resulting representation dynamic image because it summarizes the video dynamics in addition to appearance. This powerful idea allows to convert any video to an image so that existing CNN models pre-trained with still images can be immediately extended to videos. We also present an efficient approximate rank pooling operator that runs two orders of magnitude faster than the standard ones with any loss in ranking performance and can be formulated as a CNN layer. To demonstrate the power of the representation, we introduce a novel four stream CNN architecture which can learn from RGB and optical flow frames as well as from their dynamic image representations. We show that the proposed network achieves state-of-the-art performance, 95.5 and 72.5 percent accuracy, in the UCF101 and HMDB51, respectively.",
"title": ""
},
{
"docid": "529e132a37f9fb37ddf04984236f4b36",
"text": "The first steps in analyzing defensive malware are understanding what obfuscations are present in real-world malware binaries, how these obfuscations hinder analysis, and how they can be overcome. While some obfuscations have been reported independently, this survey consolidates the discussion while adding substantial depth and breadth to it. This survey also quantifies the relative prevalence of these obfuscations by using the Dyninst binary analysis and instrumentation tool that was recently extended for defensive malware analysis. The goal of this survey is to encourage analysts to focus on resolving the obfuscations that are most prevalent in real-world malware.",
"title": ""
},
{
"docid": "4dc9360837b5793a7c322f5b549fdeb1",
"text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering",
"title": ""
},
{
"docid": "37fc66892e1cf8c446fe4028f8d8f19c",
"text": "A survey of the literature reveals that image processing tools aimed at supplementing the art historian's toolbox are currently in the earliest stages of development. To jump-start the development of such methods, the Van Gogh and Kroller-Muller museums in The Netherlands agreed to make a data set of 101 high-resolution gray-scale scans of paintings within their collections available to groups of image processing researchers from several different universities. This article describes the approaches to brushwork analysis and artist identification developed by three research groups, within the framework of this data set.",
"title": ""
},
{
"docid": "4690fbbaa412557e3b1c516e9355c9f8",
"text": "JCO/APRIL 2004 M distalization in Class II cases has been accomplished with various functional appliances, including fixed interarch appliances, such as the Herbst* and Jasper Jumper,** and fixed intra-arch appliances. The Twin Force Bite Corrector (TFBC)*** is a new fixed intermaxillary appliance with a built-in constant force for Class II correction. This article presents two patients who were part of a long-term prospective study currently in progress at the University of Connecticut Department of Orthodontics. Each patient was treated with the TFBC to correct a skeletal Class II malocclusion due to a retrognathic mandible.",
"title": ""
}
] |
scidocsrr
|
c97b8e354390bb8aa02cc7683751fd4d
|
Text Summarization via Hidden Markov Models
|
[
{
"docid": "c0a67a4d169590fa40dfa9d80768ef09",
"text": "Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. In the exploratory research described, the complete text of an article in machine-readable form i s scanned by a n IBM 704 data-processing machine and analyzed in accordance with a standard program. Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the \" auto-abstract. \" Introduction",
"title": ""
}
] |
[
{
"docid": "d5ac5e10fc2cc61e625feb28fc9095b5",
"text": "Article history: Received 8 July 2016 Received in revised form 15 November 2016 Accepted 29 December 2016 Available online 25 January 2017 As part of the post-2015 United Nations sustainable development agenda, the world has its first urban sustainable development goal (USDG) “to make cities and human settlements inclusive, safe, resilient and sustainable”. This paper provides an overview of the USDG and explores some of the difficulties around using this goal as a tool for improving cities. We argue that challenges emerge around selecting the indicators in the first place and also around the practical use of these indicators once selected. Three main practical problems of indicator use include 1) the poor availability of standardized, open and comparable data 2) the lack of strong data collection institutions at the city scale to support monitoring for the USDG and 3) “localization” the uptake and context specific application of the goal by diverse actors in widely different cities. Adding to the complexity, the USDG conversation is taking place at the same time as the proliferation of a bewildering array of indicator systems at different scales. Prompted by technological change, debates on the “data revolution” and “smart city” also have direct bearing on the USDG. We argue that despite these many complexities and challenges, the USDG framework has the potential to encourage and guide needed reforms in our cities but only if anchored in local institutions and initiatives informed by open, inclusive and contextually sensitive data collection and monitoring. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b705b194b79133957662c018ea6b1c7a",
"text": "Skew detection has been an important part of the document recognition system. A lot of techniques already exists and has currently been developing for detection of skew of scanned document images. This paper describes the skew detection and correction of scanned document images written in Assamese language using the horizontal and vertical projection profile analysis and brings out the differences after implementation of both the techniques.",
"title": ""
},
{
"docid": "bba15d88edc2574dcb3b12a78c3b2d57",
"text": "Gaussian Processes (GPs) are widely used tools in statistics, machine learning, robotics, computer vision, and scientific computation. However, despite their popularity, they can be difficult to apply; all but the simplest classification or regression applications require specification and inference over complex covariance functions that do not admit simple analytical posteriors. This paper shows how to embed Gaussian processes in any higherorder probabilistic programming language, using an idiom based on memoization, and demonstrates its utility by implementing and extending classic and state-of-the-art GP applications. The interface to Gaussian processes, called gpmem, takes an arbitrary real-valued computational process as input and returns a statistical emulator that automatically improve as the original process is invoked and its input-output behavior is recorded. The flexibility of gpmem is illustrated via three applications: (i) Robust GP regression with hierarchical hyper-parameter learning, (ii) discovering symbolic expressions from time-series data by fully Bayesian structure learning over kernels generated by a stochastic grammar, and (iii) a bandit formulation of Bayesian optimization with automatic inference and action selection. All applications share a single 50-line Python library and require fewer than 20 lines of probabilistic code each.",
"title": ""
},
{
"docid": "8adcbd916e99e63d5dcebf58f19e2e05",
"text": "Cloud computing is still a juvenile and most dynamic field characterized by a buzzing IT industry. Virtually every industry and even some parts of the public sector are taking on cloud computing today, either as a provider or as a consumer. It has now become essentially an inseparable part of everyone's life. The cloud thus has become a part of the critical global infrastructure but is unique in that it has no customary borders to safeguard it from attacks. Once weakened these web servers can serve as a launching point for conducting further attacks against users in the cloud. One such attack is the DoS or its version DDOS attack. Distributed Denial of Service (DdoS) Attacks have recently emerged as one of the most newsworthy, if not the greatest weaknesses of the Internet. DDoS attacks cause economic losses due to the unavailability of services and potentially serious security problems due to incapacitation of critical infrastructures. This paper presents a simple distance estimation based technique to detect and prevent the cloud from flooding based DDoS attack and thereby protect other servers and users from its adverse effects.",
"title": ""
},
{
"docid": "406b1d13ecc9c9097079c8a24c15a332",
"text": "We propose an automated breast cancer triage CAD system using machine vision on low-cost, portable ultrasound imaging devices. We demonstrate that the triage CAD software can effectively analyze images captured by minimally-trained operators and output one of three assessments - benign, probably benign (6-month follow-up recommended) and suspicious (biopsy recommended). This system opens up the possibility of offering practical, cost-effective breast cancer diagnosis for symptomatic women in economically developing countries.",
"title": ""
},
{
"docid": "ee505ad2cc262881e46e6b119111f833",
"text": "This paper presents a novel hardware-oriented image compression algorithm and its very large-scale integration (VLSI) implementation for wireless sensor networks. The proposed novel image compression algorithm consists of a fuzzy decision, block partition, digital halftoning, and block truncation coding (BTC) techniques. A novel variable-size block partition technique was used in the proposed algorithm to improve image quality and compression performance. In addition, eight different types of blocks were encoded by Huffman coding according to probability to increase the compression ratio further. In order to achieve the low-cost and low-power characteristics, a novel iteration-based BTC training module was created to get representative levels and meet the requirement of wireless sensor networks. A prediction and a modified Golomb–Rice coding modules were designed to encode the information of representative levels to achieve higher compression performance. The proposed algorithm was realized by a VLSI technique with an UMC 0.18- $\\mu \\text{m}$ CMOS process. The synthesized gate counts and core area of this design were 6.4 k gate counts and 60 000 $\\mu \\text{m}^{2}$ , respectively. The operating frequency and power consumption were 100 MHz and 3.11 mW respectively. Compared with previous JPEG, JPEG-LS, and fixed-size BTC-based designs, this work reduced 20.9% gate counts more than previous designs. Moreover, the proposed design required only a one-line-buffer memory rather than a frame-buffer memory required by previous designs.",
"title": ""
},
{
"docid": "1cdd599b49d9122077a480a75391aae8",
"text": "Two aspects of children's early gender development-the spontaneous production of gender labels and gender-typed play-were examined longitudinally in a sample of 82 children. Survival analysis, a statistical technique well suited to questions involving developmental transitions, was used to investigate the timing of the onset of children's gender labeling as based on mothers' biweekly telephone interviews regarding their children's language from 9 through 21 months. Videotapes of children's play both alone and with mother during home visits at 17 and 21 months were independently analyzed for play with gender-stereotyped and gender-neutral toys. Finally, the relation between gender labeling and gender-typed play was examined. Children transitioned to using gender labels at approximately 19 months, on average. Although girls and boys showed similar patterns in the development of gender labeling, girls began labeling significantly earlier than boys. Modest sex differences in play were present at 17 months and increased at 21 months. Gender labeling predicted increases in gender-typed play, suggesting that knowledge of gender categories might influence gender typing before the age of 2.",
"title": ""
},
{
"docid": "f3bc3e8c34574be5db727acc1aa72e64",
"text": "In this paper we investigate possible ways to improve the energy efficiency of a general purpose microprocessor. We show that the energy of a processor depends on its performance, so we chose the energy-delay product to compare different processors. To improve the energy-delay product we explore methods of reducing energy consumption that do not lead to performance loss (i.e., wasted energy), and explore methods to reduce delay by exploiting instruction level parallelism. We found that careful design reduced the energy dissipation by almost 25%. Pipelining can give approximately a 2x improvement in energydelay product. Superscalar issue, however, does not improve the energy-delay product any further since the overhead required offsets the gains in performance. Further improvements will be hard to come by since a large fraction of the energy (5040%) is dissipated in the clock network and the on-chip memories. Thus, the efficiency of processors will depend more on the technology being used and the algorithm chosen by the programmer than the micro-architecture.",
"title": ""
},
{
"docid": "760edd83045a80dbb2231c0ffbef2ea7",
"text": "This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at https://github.com/zqs1022/interpretableCNN.",
"title": ""
},
{
"docid": "051d3f7148f72827f2042796b01ae8e5",
"text": "Many stochastic optimization algorithms work by estimating the gradient of the cost function on the fly by sampling datapoints uniformly at random from a training set. However, the estimator might have a large variance, which inadvertantly slows down the convergence rate of the algorithms. One way to reduce this variance is to sample the datapoints from a carefully selected non-uniform distribution. In this work, we propose a novel non-uniform sampling approach that uses the multiarmed bandit framework. Theoretically, we show that our algorithm asymptotically approximates the optimal variance within a factor of 3. Empirically, we show that using this datapoint-selection technique results in a significant reduction of the convergence time and variance of several stochastic optimization algorithms such as SGD and SAGA. This approach for sampling datapoints is general, and can be used in conjunction with any algorithm that uses an unbiased gradient estimation – we expect it to have broad applicability beyond the specific examples explored in this work.",
"title": ""
},
{
"docid": "2ea886246d4f59d88c3eabd99c60dd5d",
"text": "This paper proposes a Modified Particle Swarm Optimization with Time Varying Acceleration Coefficients (MPSO-TVAC) for solving economic load dispatch (ELD) problem. Due to prohibited operating zones (POZ) and ramp rate limits of the practical generators, the ELD problems become nonlinear and nonconvex optimization problem. Furthermore, the ELD problem may be more complicated if transmission losses are considered. Particle swarm optimization (PSO) is one of the famous heuristic methods for solving nonconvex problems. However, this method may suffer to trap at local minima especially for multimodal problem. To improve the solution quality and robustness of PSO algorithm, a new best neighbour particle called ‘rbest’ is proposed. The rbest provides extra information for each particle that is randomly selected from other best particles in order to diversify the movement of particle and avoid premature convergence. The effectiveness of MPSO-TVAC algorithm is tested on different power systems with POZ, ramp-rate limits and transmission loss constraints. To validate the performances of the proposed algorithm, comparative studies have been carried out in terms of convergence characteristic, solution quality, computation time and robustness. Simulation results found that the proposed MPSO-TVAC algorithm has good solution quality and more robust than other methods reported in previous work.",
"title": ""
},
{
"docid": "06651dad0fd00b7f93b7ed2230d8bdbc",
"text": "A proof of retrievability (POR) is a compact proof by a file system (prover) to a client (verifier) that a target file F is intact, in the sense that the client can fully recover it. As PORs incur lower communication complexity than transmission of F itself, they are an attractive building block for high-assurance remote storage systems.\n In this paper, we propose a theoretical framework for the design of PORs. Our framework improves the previously proposed POR constructions of Juels-Kaliski and Shacham-Waters, and also sheds light on the conceptual limitations of previous theoretical models for PORs. It supports a fully Byzantine adversarial model, carrying only the restriction---fundamental to all PORs---that the adversary's error rate be bounded when the client seeks to extract F. We propose a new variant on the Juels-Kaliski protocol and describe a prototype implementation. We demonstrate practical encoding even for files F whose size exceeds that of client main memory.",
"title": ""
},
{
"docid": "4b1948d0b09047baf27b95f5b416c8e7",
"text": "Recently, several pattern recognition methods have been proposed to automatically discriminate between patients with and without Alzheimer's disease using different imaging modalities: sMRI, fMRI, PET and SPECT. Classical approaches in visual information retrieval have been successfully used for analysis of structural MRI brain images. In this paper, we use the visual indexing framework and pattern recognition analysis based on structural MRI data to discriminate three classes of subjects: normal controls (NC), mild cognitive impairment (MCI) and Alzheimer's disease (AD). The approach uses the circular harmonic functions (CHFs) to extract local features from the most involved areas in the disease: hippocampus and posterior cingulate cortex (PCC) in each slice in all three brain projections. The features are quantized using the Bag-of-Visual-Words approach to build one signature by brain (subject). This yields a transformation of a full 3D image of brain ROIs into a 1D signature, a histogram of quantized features. To reduce the dimensionality of the signature, we use the PCA technique. Support vector machines classifiers are then applied to classify groups. The experiments were conducted on a subset of ADNI dataset and applied to the \"Bordeaux-3City\" dataset. The results showed that our approach achieves respectively for ADNI dataset and \"Bordeaux-3City\" dataset; for AD vs NC classification, an accuracy of 83.77% and 78%, a specificity of 88.2% and 80.4% and a sensitivity of 79.09% and 74.7%. For NC vs MCI classification we achieved for the ADNI datasets an accuracy of 69.45%, a specificity of 74.8% and a sensitivity of 62.52%. For the most challenging classification task (AD vs MCI), we reached an accuracy of 62.07%, a specificity of 75.15% and a sensitivity of 49.02%. The use of PCC visual features description improves classification results by more than 5% compared to the use of hippocampus features only. Our approach is automatic, less time-consuming and does not require the intervention of the clinician during the disease diagnosis.",
"title": ""
},
{
"docid": "fa2e8f411d74030bbec7937114f88f35",
"text": "We present a method for synthesizing a frontal, neutralexpression image of a person’s face given an input face photograph. This is achieved by learning to generate facial landmarks and textures from features extracted from a facial-recognition network. Unlike previous generative approaches, our encoding feature vector is largely invariant to lighting, pose, and facial expression. Exploiting this invariance, we train our decoder network using only frontal, neutral-expression photographs. Since these photographs are well aligned, we can decompose them into a sparse set of landmark points and aligned texture maps. The decoder then predicts landmarks and textures independently and combines them using a differentiable image warping operation. The resulting images can be used for a number of applications, such as analyzing facial attributes, exposure and white balance adjustment, or creating a 3-D avatar.",
"title": ""
},
{
"docid": "10c861ca1bdd7133d05f659efc7c9874",
"text": "Based on land use and land cover (LULC) datasets in the late 1970s, the early 1990s, 2004 and 2012, we analyzed characteristics of LULC change in the headwaters of the Yangtze River and Yellow River over the past 30 years contrastively, using the transition matrix and LULC change index. The results showed that, in 2012, the LULC in the headwaters of the Yellow River were different compared to those of the headwaters of the Yangtze River, with more grassland and wetand marshland. In the past 30 years, the grassland and wetand marshland increasing at the expense of sand, gobi, and bare land and desert were the main LULC change types in the headwaters of the Yangtze River, with the macro-ecological situation experiencing a process of degeneration, slight melioration, and continuous melioration, in that order. In the headwaters of the Yellow River, severe reduction of grassland coverage, shrinkage of wetand marshland and the consequential expansion of sand, gobi and bare land were noticed. The macro-ecological situation experienced a process of degeneration, obvious degeneration, and slight melioration, in that order, and the overall change in magnitude was more dramatic than that in the headwaters of the Yangtze River. These different LULC change courses were jointly driven by climate change, grassland-grazing pressure, and the implementation of ecological construction projects.",
"title": ""
},
{
"docid": "ec90e30c0ae657f25600378721b82427",
"text": "We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.",
"title": ""
},
{
"docid": "a06274d9bf6dba90ea0178ec11a20fb6",
"text": "Osteoporosis has become one of the most prevalent and costly diseases in the world. It is a metabolic disease characterized by reduction in bone mass due to an imbalance between bone formation and resorption. Osteoporosis causes fractures, prolongs bone healing, and impedes osseointegration of dental implants. Its pathological features include osteopenia, degradation of bone tissue microstructure, and increase of bone fragility. In traditional Chinese medicine, the herb Rhizoma Drynariae has been commonly used to treat osteoporosis and bone nonunion. However, the precise underlying mechanism is as yet unclear. Osteoprotegerin is a cytokine receptor shown to play an important role in osteoblast differentiation and bone formation. Hence, activators and ligands of osteoprotegerin are promising drug targets and have been the focus of studies on the development of therapeutics against osteoporosis. In the current study, we found that naringin could synergistically enhance the action of 1α,25-dihydroxyvitamin D3 in promoting the secretion of osteoprotegerin by osteoblasts in vitro. In addition, naringin can also influence the generation of osteoclasts and subsequently bone loss during organ culture. In conclusion, this study provides evidence that natural compounds such as naringin have the potential to be used as alternative medicines for the prevention and treatment of osteolysis.",
"title": ""
},
{
"docid": "8090121a59c1070aacc7a20941898551",
"text": "In this article, I explicitly solve dynamic portfolio choice problems, up to the solution of an ordinary differential equation (ODE), when the asset returns are quadratic and the agent has a constant relative risk aversion (CRRA) coefficient. My solution includes as special cases many existing explicit solutions of dynamic portfolio choice problems. I also present three applications that are not in the literature. Application 1 is the bond portfolio selection problem when bond returns are described by ‘‘quadratic term structure models.’’ Application 2 is the stock portfolio selection problem when stock return volatility is stochastic as in Heston model. Application 3 is a bond and stock portfolio selection problem when the interest rate is stochastic and stock returns display stochastic volatility. (JEL G11)",
"title": ""
},
{
"docid": "fd050993a4f3cfa4557db0f5f1862500",
"text": "Most modern research on the effects of feedback during learning has assumed that feedback is an error correction mechanism. Recent studies of feedback-timing effects have suggested that feedback might also strengthen initially correct responses. In an experiment involving cued recall of trivia facts, we directly tested several theories of feedback-timing effects and also examined the effects of restudy and retest trials following immediate and delayed feedback. Results were not consistent with theories assuming that the only function of feedback is to correct initial errors but instead supported a theoretical account assuming that delaying feedback strengthens initially correct responses due to the spacing of encoding opportunities: Delaying feedback increased the probability of correct response perseveration on the final retention test but had minimal effects on error correction or error perseveration probabilities. In a 2nd experiment, the effects of varying the lags between study, test, and feedback trials during learning provided further support for the spacing hypothesis. (PsycINFO Database Record (c) 2009 APA, all rights reserved).",
"title": ""
},
{
"docid": "72108944c9dfbb4a50da07aea41d22f5",
"text": "This study examined the perception of drug abuse amongst Nigerian undergraduates living off-campus. Students were surveyed at the Lagos State University, Ojo, allowing for a diverse sample that included a large percentage of the students from different faculties and departments. The undergraduate students were surveyed with a structured self-reporting anonymous questionnaire modified and adapted from the WHO student drug survey proforma. Of the 1000 students surveyed, a total of 807 responded to the questionnaire resulting in 80.7% response rate. Majority (77.9%) of the students were aged 19-30 years and unmarried. Six hundred and ninety eight (86.5%) claimed they were aware of drug abuse, but contrarily they demonstrated poor knowledge and awareness. Marijuana, 298 (45.7%) was the most common drug of abuse seen by most of the students. They were unable to identify very well the predisposing factors to drug use and the attending risks. Two hundred and sixty six (33.0%) students were currently taking one or more drugs of abuse. Coffee (43.1%) was the most commonly used drug, followed by alcohol (25.8%) and marijuana (7.4%). Despite chronic use of these drugs (5 years and above), addiction is not a common finding. The study also revealed the poor attitudes of the undergraduates to drug addicts even after rehabilitation. It was therefore concluded that the awareness, knowledge, practices and attitudes of Nigerian undergraduates towards drug abuse is very poor. Considerably more research is needed to develop effective prevention strategy that combines school-based interventions with those affecting the family, social institutions and the larger community.",
"title": ""
}
] |
scidocsrr
|
484a8bae8e9e1313beb4986e0f736163
|
M-net: A Convolutional Neural Network for deep brain structure segmentation
|
[
{
"docid": "3342e2f79a6bb555797224ac4738e768",
"text": "Regions in three-dimensional magnetic resonance (MR) brain images can be classified using protocols for manually segmenting and labeling structures. For large cohorts, time and expertise requirements make this approach impractical. To achieve automation, an individual segmentation can be propagated to another individual using an anatomical correspondence estimate relating the atlas image to the target image. The accuracy of the resulting target labeling has been limited but can potentially be improved by combining multiple segmentations using decision fusion. We studied segmentation propagation and decision fusion on 30 normal brain MR images, which had been manually segmented into 67 structures. Correspondence estimates were established by nonrigid registration using free-form deformations. Both direct label propagation and an indirect approach were tested. Individual propagations showed an average similarity index (SI) of 0.754+/-0.016 against manual segmentations. Decision fusion using 29 input segmentations increased SI to 0.836+/-0.009. For indirect propagation of a single source via 27 intermediate images, SI was 0.779+/-0.013. We also studied the effect of the decision fusion procedure using a numerical simulation with synthetic input data. The results helped to formulate a model that predicts the quality improvement of fused brain segmentations based on the number of individual propagated segmentations combined. We demonstrate a practicable procedure that exceeds the accuracy of previous automatic methods and can compete with manual delineations.",
"title": ""
},
{
"docid": "6df12ee53551f4a3bd03bca4ca545bf1",
"text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.",
"title": ""
}
] |
[
{
"docid": "db54705e3d975b6abba54a854e3e1158",
"text": "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as \"modularity\" over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets.",
"title": ""
},
{
"docid": "c84d9280eeff09a472ed7d06c2fc3477",
"text": "This paper presents a novel feature selection approach to deal with issues of high dimensionality in biomedical data classification. Extensive research has been performed in the field of pattern recognition and machine learning. Dozens of feature selection methods have been developed in the literature, which can be classified into three main categories: filter, wrapper and hybrid approaches. Filter methods apply an independent test without involving any learning algorithm, while wrapper methods require a predetermined learning algorithm for feature subset evaluation. Filter and wrapper methods have their, respectively, drawbacks and are complementary to each other in that filter approaches have low computational cost with insufficient reliability in classification while wrapper methods tend to have superior classification accuracy but require great computational power. The approach proposed in this paper integrates filter and wrapper methods into a sequential search procedure with the aim to improve the classification performance of the features selected. The proposed approach is featured by (1) adding a pre-selection step to improve the effectiveness in searching the feature subsets with improved classification performances and (2) using Receiver Operating Characteristics (ROC) curves to characterize the performance of individual features and feature subsets in the classification. Compared with the conventional Sequential Forward Floating Search (SFFS), which has been considered as one of the best feature selection methods in the literature, experimental results demonstrate that (i) the proposed approach is able to select feature subsets with better classification performance than the SFFS method and (ii) the integrated feature pre-selection mechanism, by means of a new selection criterion and filter method, helps to solve the over-fitting problems and reduces the chances of getting a local optimal solution.",
"title": ""
},
{
"docid": "b53e02dd4b59dce98d1e1bdd7982dd33",
"text": "The validity of a developmentally based life-stress model of depression was evaluated in 88 clinic-referred youngsters. The model focused on (a) the role of child-environment transactions, (b) the specificity of stress-psychopathology relations, and (c) the consideration of both episodic and chronic stress. Semistructured diagnostic and life-stress interviews were administered to youngsters and their parents. As predicted, in the total sample child depression was associated with interpersonal episodic and chronic stress, whereas externalizing disorder was associated with noninterpersonal episodic and chronic stress. However, the pattern of results differed somewhat in boys and girls. Youngsters with comorbid depression and externalizing disorder tended to experience the highest stress levels. Support was obtained for a stress-generation model of depression, wherein children precipitate stressful events and circumstances. In fact, stress that was in part dependent on children's contribution distinguished best among diagnostic groups, whereas independent stress had little discriminative power. Results suggest that life-stress research may benefit from the application of transactional models of developmental psychopathology, which consider how children participate in the construction of stressful environments.",
"title": ""
},
{
"docid": "94d7144fb4d3e1ebf9ad5e52fd7b5918",
"text": "Regression testing is a crucial part of software development. It checks that software changes do not break existing functionality. An important assumption of regression testing is that test outcomes are deterministic: an unmodified test is expected to either always pass or always fail for the same code under test. Unfortunately, in practice, some tests often called flaky tests—have non-deterministic outcomes. Such tests undermine the regression testing as they make it difficult to rely on test results. We present the first extensive study of flaky tests. We study in detail a total of 201 commits that likely fix flaky tests in 51 open-source projects. We classify the most common root causes of flaky tests, identify approaches that could manifest flaky behavior, and describe common strategies that developers use to fix flaky tests. We believe that our insights and implications can help guide future research on the important topic of (avoiding) flaky tests.",
"title": ""
},
{
"docid": "4d987e2c0f3f49609f70149460201889",
"text": "Estimating count and density maps from crowd images has a wide range of applications such as video surveillance, traffic monitoring, public safety and urban planning. In addition, techniques developed for crowd counting can be applied to related tasks in other fields of study such as cell microscopy, vehicle counting and environmental survey. The task of crowd counting and density map estimation is riddled with many challenges such as occlusions, non-uniform density, intra-scene and inter-scene variations in scale and perspective. Nevertheless, over the last few years, crowd count analysis has evolved from earlier methods that are often limited to small variations in crowd density and scales to the current state-of-the-art methods that have developed the ability to perform successfully on a wide range of scenarios. The success of crowd counting methods in the recent years can be largely attributed to deep learning and publications of challenging datasets. In this paper, we provide a comprehensive survey of recent Convolutional Neural Network (CNN) based approaches that have demonstrated significant improvements over earlier methods that rely largely on hand-crafted representations. First, we briefly review the pioneering methods that use hand-crafted representations and then we delve in detail into the deep learning-based approaches and recently published datasets. Furthermore, we discuss the merits and drawbacks of existing CNN-based approaches and identify promising avenues of research in this rapidly evolving field. c © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cbc0e3dff1d86d88c416b1119fd3da82",
"text": "One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, and with little to no a-priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, ar X iv :1 71 2. 02 05 2v 1 [ cs .R O ] 6 D ec 2 01 7 and showcase how all the distinct components can be integrated to enable smooth robot operation. We provide critical insight on hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments.",
"title": ""
},
{
"docid": "40b5929886bc0b924ff2de9ad788f515",
"text": "Accurate measurement of the rotor angle and speed of synchronous generators is instrumental in developing powerful local or wide-area control and monitoring systems to enhance power grid stability and reliability. Exogenous input signals such as field voltage and mechanical torque are critical information in this context, but obtaining them raises significant logistical challenges, which in turn complicates the estimation of the generator dynamic states from easily available terminal phasor measurement unit (PMU) signals only. To overcome these issues, the authors of this paper employ the extended Kalman filter with unknown inputs, referred to as the EKF-UI technique, for decentralized dynamic state estimation of a synchronous machine states using terminal active and reactive powers, voltage phasor and frequency measurements. The formulation is fully decentralized without single-machine infinite bus (SMIB) or excitation model assumption so that only local information is required. It is demonstrated that using the decentralized EKF-UI scheme, synchronous machine states can be estimated accurately enough to enable wide-area power system stabilizers (WA-PSS) and system integrity protection schemes (SIPS). Simulation results on New-England test system, Hydro-Québec simplified system, and Kundur network highlight the efficiency of the proposed method under fault conditions with electromagnetic transients and full-order generator models in realistic multi-machine setups.",
"title": ""
},
{
"docid": "f9b3813d806e93cc0a88143c89cd1379",
"text": "Deep Neural Networks (DNNs) are very popular these days, and are the subject of a very intense investigation. A DNN is made up of layers of internal units (or neurons), each of which computes an affine combination of the output of the units in the previous layer, applies a nonlinear operator, and outputs the corresponding value (also known as activation). A commonly-used nonlinear operator is the so-called rectified linear unit (ReLU), whose output is just the maximum between its input value and zero. In this (and other similar cases like max pooling, where the max operation involves more than one input value), for fixed parameters one can model the DNN as a 0-1 Mixed Integer Linear Program (0-1 MILP) where the continuous variables correspond to the output values of each unit, and a binary variable is associated with each ReLU to model its yes/no nature. In this paper we discuss the peculiarity of this kind of 0-1 MILP models, and describe an effective bound-tightening technique intended to ease its solution. We also present possible applications of the 0-1 MILP model arising in feature visualization and in the construction of adversarial examples. Computational results are reported, aimed at investigating (on small DNNs) the computational performance of a state-of-the-art MILP solver when applied to a known test case, namely, hand-written digit recognition.",
"title": ""
},
{
"docid": "4854e5db737553f0186d2b12a374294e",
"text": "The HFC model for parallel evolutionary computation is inspired by the stratified competition often seen in society and biology. Subpopulations are stratified by fitness. Individuals move from low-fitness to higher-fitness subpopulations if and only if they exceed the fitness-based admission threshold of the receiving subpopulation, but not of a higher one. The HFC model implements several critical features of a competent parallel evolutionary computation model, simultaneously and naturally, allowing rapid exploitation while impeding premature convergence. The AHFC model is an adaptive version of HFC, extending it by allowing the admission thresholds of fitness levels to be determined dynamically by the evolution process itself. The effectiveness of the Adaptive HFC model is compared with the HFC model on a genetic programming-based evolutionary synthesis example.",
"title": ""
},
{
"docid": "8adb07a99940383139f0d4ed32f68f7c",
"text": "The gene ASPM (abnormal spindle-like microcephaly associated) is a specific regulator of brain size, and its evolution in the lineage leading to Homo sapiens was driven by strong positive selection. Here, we show that one genetic variant of ASPM in humans arose merely about 5800 years ago and has since swept to high frequency under strong positive selection. These findings, especially the remarkably young age of the positively selected variant, suggest that the human brain is still undergoing rapid adaptive evolution.",
"title": ""
},
{
"docid": "7ea89697894cb9e0da5bfcebf63be678",
"text": "This paper develops a frequency-domain iterative machine learning (IML) approach for output tracking. Frequency-domain iterative learning control allows bounded noncausal inversion of system dynamics and is, therefore, applicable to nonminimum phase systems. The model used in the frequency-domain control update can be obtained from the input–output data acquired during the iteration process. However, such data-based approaches can have challenges if the noise-to-output-signal ratio is large. The main contribution of this paper is the use of kernel-based machine learning during the iterations to estimate both the model (and its inverse) for the control update, as well as the model uncertainty needed to establish bounds on the iteration gain for ensuring convergence. Another contribution is the proposed use of augmented inputs with persistency of excitation to promote learning of the model during iterations. The improved model can be used to better infer the inverse input resulting in lower initial error for new output trajectories. The proposed IML approach with the augmented input is illustrated with simulations for a benchmark nonminimum phase example.",
"title": ""
},
{
"docid": "ebb68e2067fe684756514ce61871a820",
"text": "Ž . Ž PLS-regression PLSR is the PLS approach in its simplest, and in chemistry and technology, most used form two-block . predictive PLS . PLSR is a method for relating two data matrices, X andY, by a linear multivariate model, but goes beyond traditional regression in that it models also the structure of X andY. PLSR derives its usefulness from its ability to analyze data with many, noisy, collinear, and even incomplete variables in both X andY. PLSR has the desirable property that the precision of the model parameters improves with the increasing number of relevant variables and observations. This article reviews PLSR as it has developed to become a standard tool in chemometrics and used in chemistry and engineering. The underlying model and its assumptions are discussed, and commonly used diagnostics are reviewed together with the interpretation of resulting parameters. Ž . Two examples are used as illustrations: First, a Quantitative Structure–Activity Relationship QSAR rQuantitative StrucŽ . ture–Property Relationship QSPR data set of peptides is used to outline how to develop, interpret and refine a PLSR model. Second, a data set from the manufacturing of recycled paper is analyzed to illustrate time series modelling of process data by means of PLSR and time-lagged X-variables. q2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "90fdac33a73d1615db1af0c94016da5b",
"text": "AIM OF THE STUDY\nThe purpose of this study was to define antidiabetic effects of fruit of Vaccinium arctostaphylos L. (Ericaceae) which is traditionally used in Iran for improving of health status of diabetic patients.\n\n\nMATERIALS AND METHODS\nFirstly, we examined the effect of ethanolic extract of Vaccinium arctostaphylos fruit on postprandial blood glucose (PBG) after 1, 3, 5, 8, and 24h following a single dose administration of the extract to alloxan-diabetic male Wistar rats. Also oral glucose tolerance test was carried out. Secondly, PBG was measured at the end of 1, 2 and 3 weeks following 3 weeks daily administration of the extract. At the end of treatment period the pancreatic INS and cardiac GLUT-4 mRNA expression and also the changes in the plasma lipid profiles and antioxidant enzymes activities were assessed. Finally, we examined the inhibitory activity of the extract against rat intestinal α-glucosidase.\n\n\nRESULTS\nThe obtained results showed mild acute (18%) and also significant chronic (35%) decrease in the PBG, significant reduction in triglyceride (47%) and notable rising of the erythrocyte superoxide dismutase (57%), glutathione peroxidase (35%) and catalase (19%) activities due to treatment with the extract. Also we observed increased expression of GLUT-4 and INS genes in plant extract treated Wistar rats. Furthermore, in vitro studies displayed 47% and 56% inhibitory effects of the extract on activity of intestinal maltase and sucrase enzymes, respectively.\n\n\nCONCLUSIONS\nFindings of this study allow us to establish scientifically Vaccinium arctostaphylos fruit as a potent antidiabetic agent with antihyperglycemic, antioxidant and triglyceride lowering effects.",
"title": ""
},
{
"docid": "5e59888b6e0c562d546618dd95fa00b8",
"text": "The massive acceleration of the nitrogen cycle as a result of the production and industrial use of artificial nitrogen fertilizers worldwide has enabled humankind to greatly increase food production, but it has also led to a host of environmental problems, ranging from eutrophication of terrestrial and aquatic systems to global acidification. The findings of many national and international research programmes investigating the manifold consequences of human alteration of the nitrogen cycle have led to a much improved understanding of the scope of the anthropogenic nitrogen problem and possible strategies for managing it. Considerably less emphasis has been placed on the study of the interactions of nitrogen with the other major biogeochemical cycles, particularly that of carbon, and how these cycles interact with the climate system in the presence of the ever-increasing human intervention in the Earth system. With the release of carbon dioxide (CO2) from the burning of fossil fuels pushing the climate system into uncharted territory, which has major consequences for the functioning of the global carbon cycle, and with nitrogen having a crucial role in controlling key aspects of this cycle, questions about the nature and importance of nitrogen–carbon–climate interactions are becoming increasingly pressing. The central question is how the availability of nitrogen will affect the capacity of Earth’s biosphere to continue absorbing carbon from the atmosphere (see page 289), and hence continue to help in mitigating climate change. Addressing this and other open issues with regard to nitrogen–carbon–climate interactions requires an Earth-system perspective that investigates the dynamics of the nitrogen cycle in the context of a changing carbon cycle, a changing climate and changes in human actions.",
"title": ""
},
{
"docid": "d15804e98b58fa5ec0985c44f6bb6033",
"text": "Urrently, the most successful learning models in computer vision are based on learning successive representations followed by a decision layer. This is usually actualized through feedforward multilayer neural networks, e.g. ConvNets, where each layer forms one of such successive representations. However, an alternative that can achieve the same goal is a feedback based approach in which the representation is formed in an iterative manner based on a feedback received from previous iterations output. We establish that a feedback based approach has several core advantages over feedforward: it enables making early predictions at the query time, its output naturally conforms to a hierarchical structure in the label space (e.g. a taxonomy), and it provides a new basis for Curriculum Learning. We observe that feedback develops a considerably different representation compared to feedforward counterparts, in line with the aforementioned advantages. We provide a general feedback based learning architecture, instantiated using existing RNNs, with the endpoint results on par or better than existing feedforward networks and the addition of the above advantages.",
"title": ""
},
{
"docid": "295212e614cc361b1a5fdd320d39f68b",
"text": "Aiming to meet the explosive growth of mobile data traffic and reduce the network congestion, we study Time Dependent Adaptive Pricing (TDAP) with threshold policies to motivate users to shift their Internet access from peak hours to off-peak hours. With the proposed TDAP scheme, Internet Service Providers (ISPs) will be able to use less network capacity to provide users Internet access service with the same QoS. Simulation and analysis are carried out to investigate the performance of the proposed TDAP scheme based on the real Internet traffic pattern.",
"title": ""
},
{
"docid": "9fa6367abccea6257eebe73e18024103",
"text": "We present a novel technique for image set based face/object recognition, where each gallery and query example contains a face/object image set captured from different viewpoints, background, facial expressions, resolution and illumination levels. While several image set classification approaches have been proposed in recent years, most of them represent each image set as a single linear subspace, assumptions in regards to the specific category of the geometric surface on which images of the set are believed to lie. This could result in a loss of discriminative information for classification. This paper alleviates these limitations by proposing an Iterative Deep Learning Model (IDLM) that automatically and hierarchically learns discriminative representations from raw face and object images. In the proposed approach, low level translationally invariant features are learnt by the Pooled Convolutional Layer (PCL). The latter is followed by Artificial Neural Networks (ANNs) applied iteratively in a hierarchical fashion to learn a discriminative non-linear feature representation of the input image sets. The proposed technique was extensively evaluated for the task of image set based face and object recognition on YouTube Celebrities, Honda/UCSD, CMU Mobo and ETH-80 (object) dataset, respectively. Experimental results and comparisons with state-of-the-art methods show that our technique achieves the best performance on all these datasets. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d17d48c7724d8a7243b12719f74f4e02",
"text": "We are living in a world which is undergoing profound changes brought about by rapid advances in science and technology. Among such changes, the most visible are those that relate to what is popularly referred to as the information revolution. The artifacts of this revolution are all around us: the e-mail, the world wide web, the cellular phone; the fax; and the desktop computer, among many others. Linked to the information revolution is another revolution — the intelligent systems revolution. The manifestations of this revolution are not as obvious as those of the information revolution because they involve, for the most part, not new products but higher MIQ (Machine IQ) of existing systems, products and devices. Among the familiar examples are smart appliances, smart cameras, smart robots and smart software for browsing, diagnosis, fraud detection and quality control. The information and intelligent systems revolutions are in a symbiotic relationship. Intelligence requires information and vice-versa. The confluence of intelligent systems and information systems leads to intelligent information systems. In this sense, the union of information systems, intelligent systems and intelligent information systems constitutes what might be referred to as information/intelligent systems, or I/IS for short. In my perception, in coming years, the design, construction and utilization of information/intelligent systems will become the primary focus of science and technology, and I/IS systems will become a dominant presence in our daily lives. When we take a closer look at information/intelligent systems what we see is the increasingly important role of soft computing (SC) in their conception, design and utilization. Basically, soft computing is an association of computing methodologies which includes as its principal members fuzzy",
"title": ""
},
{
"docid": "0cb0c5f181ef357cd81d4a290d2cbc14",
"text": "With 3D sensing becoming cheaper, environment-aware and visually-guided robot arms capable of safely working in collaboration with humans will become common. However, a reliable calibration is needed, both for camera internal calibration, as well as Eye-to-Hand calibration, to make sure the whole system functions correctly. We present a framework, using a novel combination of well proven methods, allowing a quick automatic calibration for the integration of systems consisting of the robot and a varying number of 3D cameras by using a standard checkerboard calibration grid. Our approach allows a quick camera-to-robot recalibration after any changes to the setup, for example when cameras or robot have been repositioned. Modular design of the system ensures flexibility regarding a number of sensors used as well as different hardware choices. The framework has been proven to work by practical experiments to analyze the quality of the calibration versus the number of positions of the checkerboard used for each of the calibration procedures.",
"title": ""
}
] |
scidocsrr
|
f4311386706f2f35ed8fc154bc7adf01
|
HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment
|
[
{
"docid": "f0d3a2b2f3ca6223cab0e222da21fb54",
"text": "We present a comprehensive study of evaluation methods for unsupervised embedding techniques that obtain meaningful representations of words from text. Different evaluations result in different orderings of embedding methods, calling into question the common assumption that there is one single optimal vector representation. We present new evaluation techniques that directly compare embeddings with respect to specific queries. These methods reduce bias, provide greater insight, and allow us to solicit data-driven relevance judgments rapidly and accurately through crowdsourcing.",
"title": ""
}
] |
[
{
"docid": "ed20f85a638c4e0079bac55db1d52d01",
"text": "Cloaking is a common 'bait-and-switch' technique used to hide the true nature of a Web site by delivering blatantly different semantic content to different user segments. It is often used in search engine optimization (SEO) to obtain user traffic illegitimately for scams. In this paper, we measure and characterize the prevalence of cloaking on different search engines, how this behavior changes for targeted versus untargeted advertising and ultimately the response to site cloaking by search engine providers. Using a custom crawler, called Dagger, we track both popular search terms (e.g., as identified by Google, Alexa and Twitter) and targeted keywords (focused on pharmaceutical products) for over five months, identifying when distinct results were provided to crawlers and browsers. We further track the lifetime of cloaked search results as well as the sites they point to, demonstrating that cloakers can expect to maintain their pages in search results for several days on popular search engines and maintain the pages themselves for longer still.",
"title": ""
},
{
"docid": "0387b6a593502a9c74ee62cd8eeec886",
"text": "Recently, very deep networks, with as many as hundreds of layers, have shown great success in image classification tasks. One key component that has enabled such deep models is the use of “skip connections”, including either residual or highway connections, to alleviate the vanishing and exploding gradient problems. While these connections have been explored for speech, they have mainly been explored for feed-forward networks. Since recurrent structures, such as LSTMs, have produced state-of-the-art results on many of our Voice Search tasks, the goal of this work is to thoroughly investigate different approaches to adding depth to recurrent structures. Specifically, we experiment with novel Highway-LSTM models with bottlenecks skip connections and show that a 10 layer model can outperform a state-of-the-art 5 layer LSTM model with the same number of parameters by 2% relative WER. In addition, we experiment with Recurrent Highway layers and find these to be on par with Highway-LSTM models, when given sufficient depth.",
"title": ""
},
{
"docid": "f20c08bd1194f8589d6e56e66951a7f8",
"text": "The computational complexity grows exponentially for multi-level thresholding (MT) with the increase of the number of thresholds. Taking Kapur’s entropy as the optimized objective function, the paper puts forward the modified quick artificial bee colony algorithm (MQABC), which employs a new distance strategy for neighborhood searches. The experimental results show that MQABC can search out the optimal thresholds efficiently, precisely, and speedily, and the thresholds are very close to the results examined by exhaustive searches. In comparison to the EMO (Electro-Magnetism optimization), which is based on Kapur’s entropy, the classical ABC algorithm, and MDGWO (modified discrete grey wolf optimizer) respectively, the experimental results demonstrate that MQABC has exciting advantages over the latter three in terms of the running time in image thesholding, while maintaining the efficient segmentation quality.",
"title": ""
},
{
"docid": "de9767297368dffbdbae4073338bdb15",
"text": "An increasing number of applications rely on 3D geoinformation. In addition to 3D geometry, these applications particularly require complex semantic information. In the context of spatial data infrastructures the needed data are drawn from distributed sources and often are thematically and spatially fragmented. Straight forward joining of 3D objects would inevitably lead to geometrical inconsistencies such as cracks, permeations, or other inconsistencies. Semantic information can help to reduce the ambiguities for geometric integration, if it is coherently structured with respect to geometry. The paper discusses these problems with special focus on virtual 3D city models and the semantic data model CityGML, an emerging standard for the representation and the exchange of 3D city models based on ISO 191xx standards and GML3. Different data qualities are analyzed with respect to their semantic and spatial structure leading to the distinction of six categories regarding the spatio-semantic coherence of 3D city models. Furthermore, it is shown how spatial data with complex object descriptions support the integration process. The derived categories will help in the future development of automatic integration methods for complex 3D geodata.",
"title": ""
},
{
"docid": "67e6ec33b2afb4cf0c363d99869496bf",
"text": "This and the following two papers describe event-related potentials (ERPs) evoked by visual stimuli in 98 patients in whom electrodes were placed directly upon the cortical surface to monitor medically intractable seizures. Patients viewed pictures of faces, scrambled faces, letter-strings, number-strings, and animate and inanimate objects. This paper describes ERPs generated in striate and peristriate cortex, evoked by faces, and evoked by sinusoidal gratings, objects and letter-strings. Short-latency ERPs generated in striate and peristriate cortex were sensitive to elementary stimulus features such as luminance. Three types of face-specific ERPs were found: (i) a surface-negative potential with a peak latency of approximately 200 ms (N200) recorded from ventral occipitotemporal cortex, (ii) a lateral surface N200 recorded primarily from the middle temporal gyrus, and (iii) a late positive potential (P350) recorded from posterior ventral occipitotemporal, posterior lateral temporal and anterior ventral temporal cortex. Face-specific N200s were preceded by P150 and followed by P290 and N700 ERPs. N200 reflects initial face-specific processing, while P290, N700 and P350 reflect later face processing at or near N200 sites and in anterior ventral temporal cortex. Face-specific N200 amplitude was not significantly different in males and females, in the normal and abnormal hemisphere, or in the right and left hemisphere. However, cortical patches generating ventral face-specific N200s were larger in the right hemisphere. Other cortical patches in the same region of extrastriate cortex generated grating-sensitive N180s and object-specific or letter-string-specific N200s, suggesting that the human ventral object recognition system is segregated into functionally discrete regions.",
"title": ""
},
{
"docid": "344112b4ecf386026fd4c4714f0f3087",
"text": "This paper deals with easy programming methods of dual-arm manipulation tasks for humanoid robots. Hereby a programming by demonstration system is used in order to observe, learn and generalize tasks performed by humans. A classification for dual-arm manipulations is introduced, enabling a segmentation of tasks into adequate subtasks. Further it is shown how the generated programs are mapped on and executed by a humanoid robot.",
"title": ""
},
{
"docid": "ab3ec842ab5296e873d624732da6ee6b",
"text": "In many computer applications involving the recording and processing of personal data there is a need to allow for variations in surname spelling, caused for example by transcription errors. A number of algorithms have been developed for name matching, i.e. which attempt to identify name spelling variations, one of the best known of which is the Soundex algorithm. This paper describes a comparative analysis of a number of these algorithms and, based on an analysis of their comparative strengths and weaknesses, proposes a new and improved name matching algorithm, which we call the Phonex algorithm. The analysis takes advantage of the recent creation of a large list of “equivalent surnames”, published in the book Family History Knowledge UK [Park1992]. This list is based on data supplied by some thousands of individual genealogists, and can be presumed to be representative of British surnames and their variations over the last two or three centuries. It thus made it possible to perform what we would argue were objective tests of name matching, the results of which provide a solid basis for the analysis that we have performed, and for our claims for the merits of the new algorithm, though these are unlikely to hold fully for surnames emanating largely from other countries.",
"title": ""
},
{
"docid": "5dc25d44b0ae6ee44ee7e24832b1bc25",
"text": "The present research aims to investigate the students' perceptions levels of Edmodo and Mobile learning and to identify the real barriers of them at Taibah University in KSA. After implemented Edmodo application as an Mlearning platform, two scales were applied on the research sample, the first scale consisted of 36 statements was constructed to measure students' perceptions towards Edmodo and M-learning, and the second scale consisted of 17 items was constructed to determine the barriers of Edmodo and M-learning. The scales were distributed on 27 students during the second semester of the academic year 2013/2014. Findings indicated that students' perceptions of Edmodo and Mobile learning is in “High” level in general, and majority of students have positive perceptions towards Edmodo and Mobile learning since they think that learning using Edmodo facilitates and increases effectiveness communication of learning, and they appreciate Edmodo because it save time. Regarding the barriers of Edmodo and Mobile learning that facing several students seem like normal range, however, they were facing a problem of low mobile battery, and storing large files in their mobile phones, but they do not face any difficulty to enter the information on small screen size of mobile devices. Finally, it is suggested adding a section for M-learning in the universities to start application of M-learning and prepare a visible and audible guide for using of M-learning in teaching and learning.",
"title": ""
},
{
"docid": "79041480e35083e619bd804423459f2b",
"text": "Dynamic pricing is the dynamic adjustment of prices to consumers depending upon the value these customers attribute to a product or service. Today’s digital economy is ready for dynamic pricing; however recent research has shown that the prices will have to be adjusted in fairly sophisticated ways, based on sound mathematical models, to derive the benefits of dynamic pricing. This article attempts to survey different models that have been used in dynamic pricing. We first motivate dynamic pricing and present underlying concepts, with several examples, and explain conditions under which dynamic pricing is likely to succeed. We then bring out the role of models in computing dynamic prices. The models surveyed include inventory-based models, data-driven models, auctions, and machine learning. We present a detailed example of an e-business market to show the use of reinforcement learning in dynamic pricing.",
"title": ""
},
{
"docid": "9e2516a141cb6e46cfa6d27e723a7ba9",
"text": "In this paper, we present the method we developed when participating to the e-Risk pilot task. We use machine learning in order to solve the problem of early detection of depressive users in social media relying on various features that we detail in this paper. We submitted 4 models which differences are also detailed in this paper. Best results were obtained when using a combination of lexical and statistical features.",
"title": ""
},
{
"docid": "5a38a2d349838b32bc5c41d362a220ac",
"text": "This article considers the challenges associated with completing risk assessments in countering violent extremism. In particular, it is concerned with risk assessment of those who come to the attention of government and nongovernment organizations as being potentially on a trajectory toward terrorism and where there is an obligation to consider the potential future risk that they may pose. Risk assessment in this context is fraught with difficulty, primarily due to the variable nature of terrorism, the low base-rate problem, and the dearth of strong evidence on relevant risk and resilience factors. Statistically, this will lead to poor predictive value. Ethically, it can lead to the labeling of an individual who is not on a trajectory toward violence as being \"at risk\" of engaging in terrorism and the imposing of unnecessary risk management actions. The article argues that actuarial approaches to risk assessment in this context cannot work. However, it further argues that approaches that help assessors to process and synthesize information in a structured way are of value and are in line with good practice in the broader field of violence risk assessment. (PsycINFO Database Record",
"title": ""
},
{
"docid": "bb685e028e4f1005b7fe9da01f279784",
"text": "Although there are few efficient algorithms in the literature for scientific workflow tasks allocation and scheduling for heterogeneous resources such as those proposed in grid computing context, they usually require a bounded number of computer resources that cannot be applied in Cloud computing environment. Indeed, unlike grid, elastic computing, such asAmazon's EC2, allows users to allocate and release compute resources on-demand and pay only for what they use. Therefore, it is reasonable to assume that the number of resources is infinite. This feature of Clouds has been called âillusion of infiniteresourcesâ. However, despite the proven benefits of using Cloud to run scientific workflows, users lack guidance for choosing between multiple offering while taking into account several objectives which are often conflicting. On the other side, the workflow tasks allocation and scheduling have been shown to be NP-complete problems. Thus, it is convenient to use heuristic rather than deterministic algorithm. The objective of this paper is to design an allocation strategy for Cloud computing platform. More precisely, we propose three complementary bi-criteria approaches for scheduling workflows on distributed Cloud resources, taking into account the overall execution time and the cost incurred by using a set of resources.",
"title": ""
},
{
"docid": "4ac6eb0f8db4d2c02b877c3d1c6892e0",
"text": "Safety and efficient operation are imperative factors t offshore production sites and a main concern to all Oil & Gas companies. A promising solution to improve both safety and efficiency is to increase the level of automation on the platforms by introducing intelligent robotic systems. Robots can execute a wide variety of tasks in offshore environments, incl uding monitoring and inspection, diagnosis and maintenance, proc ess production intervention, and cargo transport operations. In particular, considering the distance of offshore platfor ms from the Brazilian coast, such technology has great potential to increase safety by decreasing the number of onboard personnel , simp ify logistics, and reduce operating costs of Brazili n facilities. The use of robots can also allow proactive int grity management and increase frequency and efficiency of platform inspection. DORIS is a research project which endeavors to design and implement a mobile robot for remote supervision, diagnosi s, and data acquisition on offshore facilities. The propos ed ystem is composed of a rail-guided mobile robot capable of carrying different sensors through the inspected environment. The robot can also analyze sensor data and identify anomalies, such a intruders, abandoned objects, smoke, fire, and liquid lea kage. The system is able to read valves and make machine ry diagnosis as well. To prove the viability of the proposed system, an initial prototype is developed using a Roomba robot with several onboard sensors and preliminary tests have been performed in a real environment similar to an offshore platform. The te sts show that the robot is capable of indicating the presence or absence o f objects in a video stream and mapping the local area wit h laser sensor data during motion. A second prototype has been built to test the DORIS mechanical design. This prototype is us ed to test concepts related to motion on a rail with straight, cu rved, horizontal, and vertical sections. Initial results support the proposed mechanical concept and its functionalities. Introduction During the last decade, several Oil & Gas companies, re sea ch groups, and academic communities have shown an increas ed interest in the use of robotic systems for operation o f offshore facilities. Recent studies project a substant ial decrease in the level of human operation and an increase in automation used o n future offshore oil fields (Skourup and Pretlove, 2009). Today, robotic systems are used mainly for subsea tasks, s uch a mapping the seabed and performing inspection tasks on underwater equipment, risers, or pipelines using Remotely O perated Vehicles (ROVs) or Autonomous Underwater Vehicle s (AUVs). Topside operations, on the other hand, have not yet ado pted robotized automation as a solution to inspection and operation tasks. From (2010) points out the potential increase in efficiency and productivity with robot operators rather than humans, give n that robots work 24 hours per day and 7 days per week, ar less prone to errors, and are more reliable. Another hi ghlighted point is the improvement Health, Safety, and Environment ( HSE) conditions, as robots can replace humans in tasks perf orm d in unhealthy, hazardous, or confined areas. In the specific Brazilian case, the Oil & Gas industry is growing at a high pace, mainly due to the recent discove ries of big oil fields in the pre-salt layer off the Brazilian coast. These oil reservoirs are located farther than 300 km f ro the shore and at depths of 5000 to 7000 km. 
These factors, especially the la rg distances, motivate the development of an offshore produ cti n system with a high degree of automation based on advanced roboti cs systems.",
"title": ""
},
{
"docid": "8a905d0abdc1a6a8daeb44137fa980ee",
"text": "In the mobile game industry, Free-to-Play games are dominantly released, and therefore player retention and purchases have become important issues. In this paper, we propose a game player model for predicting when players will leave a game. Firstly, we define player churn in the game and extract features that contain the properties of the player churn from the player logs. And then we tackle the problem of imbalanced datasets. Finally, we exploit classification algorithms from machine learning and evaluate the performance of the proposed prediction model using cross-validation. Experimental results show that the proposed model has high accuracy enough to predict churn for real-world application.",
"title": ""
},
{
"docid": "e3b92d76bb139d0601c85416e8afaca4",
"text": "Conventional supervised object recognition methods have been investigated for many years. Despite their successes, there are still two suffering limitations: (1) various information of an object is represented by artificial features only derived from RGB images, (2) lots of manually labeled data is required by supervised learning. To address those limitations, we propose a new semi-supervised learning framework based on RGB and depth (RGB-D) images to improve object recognition. In particular, our framework has two modules: (1) RGB and depth images are represented by convolutional-recursive neural networks to construct high level features, respectively, (2) co-training is exploited to make full use of unlabeled RGB-D instances due to the existing two independent views. Experiments on the standard RGB-D object dataset demonstrate that our method can compete against with other state-of-the-art methods with only 20% labeled data.",
"title": ""
},
{
"docid": "521d3777ae16b72f2b2fd931d1a7f780",
"text": "ShiViz is a new distributed system debugging visualization tool.",
"title": ""
},
{
"docid": "295809398866d81cab85c44b145df56d",
"text": "This paper discusses the “Building-In Reliability” (BIR) approach to process development, particularly for technologies integrating Bipolar, CMOS, and DMOS devices (so-called BCD technologies). Examples of BIR reliability assessments include gate oxide integrity (GOI) through Time-Dependent Dielectric Breakdown (TDDB) studies and degradation of laterally diffused MOS (LDMOS) devices by Hot-Carrier Injection (HCI) stress. TDDB allows calculation of gate oxide failure rates based on operating voltage waveforms and temperature. HCI causes increases in LDMOS resistance (Rdson), which decreases efficiency in power applications.",
"title": ""
},
{
"docid": "7ca2d093da7646ff0d69fb3ba9d675ae",
"text": "Advancements in deep learning over the years have attracted research into how deep artificial neural networks can be used in robotic systems. It is on this basis that the following research survey will present a discussion of the applications, gains, and obstacles to deep learning in comparison to physical robotic systems while using modern research as examples. The research survey will present a summarization of the current research with specific focus on the gains and obstacles in comparison to robotics. This will be followed by a primer on discussing how notable deep learning structures can be used in robotics with relevant examples. The next section will show the practical considerations robotics researchers desire to use in regard to deep learning neural networks. Finally, the research survey will show the shortcomings and solutions to mitigate them in addition to discussion of the future trends. The intention of this research is to show how recent advancements in the broader robotics field can inspire additional research in applying deep learning in robotics.",
"title": ""
},
{
"docid": "0b973f37e2d9c3d7f427b939db233f12",
"text": "Artificial intelligence (AI) generally and machine learning (ML) specifically demonstrate impressive practical success in many different application domains, e.g. in autonomous driving, speech recognition, or recommender systems. Deep learning approaches, trained on extremely large data sets or using reinforcement learning methods have even exceeded human performance in visual tasks, particularly on playing games such as Atari, or mastering the game of Go. Even in the medical domain there are remarkable results. However, the central problem of such models is that they are regarded as black-box models and even if we understand the underlying mathematical principles of such models they lack an explicit declarative knowledge representation, hence have difficulty in generating the underlying explanatory structures. This calls for systems enabling to make decisions transparent, understandable and explainable. A huge motivation for our approach are rising legal and privacy aspects. The new European General Data Protection Regulation (GDPR and ISO/IEC 27001) entering into force on May 25th 2018, will make black-box approaches difficult to use in business. This does not imply a ban on automatic learning approaches or an obligation to explain everything all the time, however, there must be a possibility to make the results re-traceable on demand. This is beneficial, e.g. for general understanding, for teaching, for learning, for research, and it can be helpful in court. In this paper we outline some of our research topics in the context of the relatively new area of explainable-AI with a focus on the application in medicine, which is a very special domain. This is due to the fact that medical professionals are working mostly with distributed heterogeneous and complex sources of data. In this paper we concentrate on three sources: images, *omics data and text. We argue that research in explainable-AI would generally help to facilitate the implementation of AI/ML in the medical domain, and specifically help to facilitate transparency and trust.",
"title": ""
},
{
"docid": "e2355a14cc9a648eec65ffa330cc2fa5",
"text": "A new family of single-stage super Class-AB operational transconductance amplifiers (OTAs) suitable for low-voltage operation and low power consumption is presented. Three novel topologies are proposed featuring simplicity and compactness. They are based on the combination of adaptive biasing techniques for the differential input stage and nonlinear current mirrors for the active load that provide additional dynamic current boosting. The OTAs have been fabricated in a standard 0.5-mum CMOS process. Experimental results show a greatly improved slew rate by factors 30-60 and gain-bandwidth product by factors 11.5-17 when compared to a classical Class-A OTA. The circuits are operated at plusmn1-V supply voltage with only 10 muA of bias current",
"title": ""
}
] |
scidocsrr
|
5dcc9e3140b9452838fa4038ed821a47
|
A novel diode-clamped CSTBT with ultra-low on-state voltage and saturation current
|
[
{
"docid": "6bdc9ba3cd272018795108fe5004c060",
"text": "Electrical characteristics of the fabricated 600V class CSTBT™ with a Light Punch Through (LPT) structure on an advanced thin wafer technology are presented for the first time. The electrical characteristics of LPT-CSTBT are superior to the conventional Punch Through type (PT) one, especially in low current density regions because of the inherent lower built-in potential. Furthermore, we also have evaluated the effects of the mechanical stress on the device characteristics after soldering, utilizing a novel evaluation method with a very small size sub-chip layout. The results validate the proposed tool is useful to examine the influence of the mechanical stress on the electrical characteristics.",
"title": ""
},
{
"docid": "28ff541a446bfb7783d1fae2492df734",
"text": "Using an advanced thin wafer technology, we have successfully fabricated the next generation 650V class IGBT with an improved SOA and maintaining the narrow distribution of the electrical characteristics for industrial applications. The applied techniques were the finer pattern transistor cell, the thin wafer process and the optimized back side doping concentration profiles. With the well organized back-side wafer process, the practically large chip has achieved without any sacrifice of the production yield. As a results, VCEsat-Eoff trade-off relationship and an Energy of Short Circuit by active Area (ESC/A) are improved in comparison with the conventional Punch Through (PT) structure.",
"title": ""
},
{
"docid": "432149654abdfdabb9147a830f50196d",
"text": "In this paper, an advanced High Voltage (HV) IGBT technology, which is focused on low loss and is the ultimate device concept for HV IGBT, is presented. CSTBTTM technology utilizing “ULSI technology” and “Light Punch-Through (LPT) II technology” (i.e. narrow Wide Cell Pitch LPT(II)-CSTBT(III)) for the first time demonstrates breaking through the limitation of HV IGBT's characteristics with voltage ratings ranging from 2500 V up to 6500 V. The improved significant trade-off characteristic between on-state voltage (VCE(sat)) and turn-off loss (EOFF) is achieved by means of a “narrow Wide Cell Pitch CSTBT(III) cell”. In addition, this device achieves a wide operating junction temperature (@218 ∼ 448K) and excellent short circuit behavior with the new cell and vertical designs. The LPT(II) concept is utilized for ensuring controllable IGBT characteristics and achieving a thin N− drift layer. Our results cover design of the Wide Cell Pitch LPT(II)-CSTBT(III) technology and demonstrate high total performance with a great improvement potential.",
"title": ""
}
] |
[
{
"docid": "2793e8eb1410b2379a8a416f0560df0a",
"text": "Alzheimer’s disease (AD) transgenic mice have been used as a standard AD model for basic mechanistic studies and drug discovery. These mouse models showed symbolic AD pathologies including β-amyloid (Aβ) plaques, gliosis and memory deficits but failed to fully recapitulate AD pathogenic cascades including robust phospho tau (p-tau) accumulation, clear neurofibrillary tangles (NFTs) and neurodegeneration, solely driven by familial AD (FAD) mutation(s). Recent advances in human stem cell and three-dimensional (3D) culture technologies made it possible to generate novel 3D neural cell culture models that recapitulate AD pathologies including robust Aβ deposition and Aβ-driven NFT-like tau pathology. These new 3D human cell culture models of AD hold a promise for a novel platform that can be used for mechanism studies in human brain-like environment and high-throughput drug screening (HTS). In this review, we will summarize the current progress in recapitulating AD pathogenic cascades in human neural cell culture models using AD patient-derived induced pluripotent stem cells (iPSCs) or genetically modified human stem cell lines. We will also explain how new 3D culture technologies were applied to accelerate Aβ and p-tau pathologies in human neural cell cultures, as compared the standard two-dimensional (2D) culture conditions. Finally, we will discuss a potential impact of the human 3D human neural cell culture models on the AD drug-development process. These revolutionary 3D culture models of AD will contribute to accelerate the discovery of novel AD drugs.",
"title": ""
},
{
"docid": "3c9857605589542835fdcc3b5d54e2bd",
"text": "Theory, design, realization and measurements of an X-band isoflux circularly polarized antenna for LEO satellite platforms are presented. The antenna is based on a metasurface composed by a dense texture of sub-wavelength metal patches on a grounded dielectric slab, excited by a surface wave generated by a coplanar feeder. The antenna is extremely flat (1.57 mm) and light (less than 1 Kg) and represents a competitive solution for space-to-ground data link applications.",
"title": ""
},
{
"docid": "a1530b82b61fc6fc8eceb083fc394e9b",
"text": "The performance of any algorithm will largely depend on the setting of its algorithm-dependent parameters. The optimal setting should allow the algorithm to achieve the best performance for solving a range of optimization problems. However, such parameter tuning itself is a tough optimization problem. In this paper, we present a framework for self-tuning algorithms so that an algorithm to be tuned can be used to tune the algorithm itself. Using the firefly algorithm as an example, we show that this framework works well. It is also found that different parameters may have different sensitivities and thus require different degrees of tuning. Parameters with high sensitivities require fine-tuning to achieve optimality.",
"title": ""
},
{
"docid": "1dbdff8b4c1d195482b1650986a944ad",
"text": "Stevioside is a natural sweetener extracted from leaves of Stevia rebaudiana Bertoni, which is commercially produced by conventional (chemical/physical) processes. This article gives an overview of the stevioside structure, various analysis technique, new technologies required and the advances achieved in recent years. An enzymatic process is established, by which the maximum efficacy and benefit of the process can be achieved. The efficiency of the enzymatic process is quite comparable to that of other physical and chemical methods. Finally, we believe that in the future, the enzyme-based extraction will ensure more cost-effective availability of stevioside, thus assisting in the development of more food-based applications.",
"title": ""
},
{
"docid": "8e449b97776006cab804aacc1773770d",
"text": "Permanent-magnet (PM) synchronous motor (PMSM) with rare-earth PMs is most popular for automotive applications because of its excellent performance such as high power density, high torque density, and high efficiency. However, the rare-earth PMs have problems such as high cost and limited supply of rare-earth material. Therefore, the electric motors with less or no rare-earth PMs are required in electric vehicle (EV) and hybrid electric vehicle (HEV) applications. This paper proposes and examines a PM-assisted synchronous reluctance motor (PMASynRM) with ferrite magnets that has competitive power density and efficiency of the rare-earth PMSM employed in HEV. The PMASynRM for automotive applications is designed taking into account the irreversible demagnetization of ferrite magnets and the mechanical strength. The prototype PMASynRM has been manufactured, and several performances such as torque, output power, losses, and efficiency are evaluated. Furthermore, the performances of the high-power PMASynRM are estimated based on the experimental results of the prototype PMASynRM, and the possibility of the application of the proposed PMASynRM to EV and HEV is discussed.",
"title": ""
},
{
"docid": "6cdd6ff86c085cad630ae278ca964ecd",
"text": "Parametric statistical models of continuous or discrete valued data are often not properly normalized, that is, they do not integrate or sum to unity. The normalization is essential for maximum likelihood estimation. While in principle, models can always be normalized by dividing them by their integral or sum (their partition function), this can in practice be extremely difficult. We have been developing methods for the estimation of unnormalized models which do not approximate the partition function using numerical integration. We review these methods, score matching and noise-contrastive estimation, point out extensions and connections both between them and methods by other authors, and discuss their pros and cons.",
"title": ""
},
{
"docid": "de1ec3df1fa76e5a419ac8506cd63286",
"text": "It is hard to estimate optical flow given a realworld video sequence with camera shake and other motion blur. In this paper, we first investigate the blur parameterization for video footage using near linear motion elements. We then combine a commercial 3D pose sensor with an RGB camera, in order to film video footage of interest together with the camera motion. We illustrates that this additional camera motion/trajectory channel can be embedded into a hybrid framework by interleaving an iterative blind deconvolution and warping based optical flow scheme. Our method yields improved accuracy within three other state-of-the-art baselines given our proposed ground truth blurry sequences; and several other realworld sequences filmed by our imaging system.",
"title": ""
},
{
"docid": "73e4f93a46d8d66599aaaeaf71c8efe2",
"text": "The galvanometer-based scanners (GS) are oscillatory optical systems utilized in high-end biomedical technologies. From a control point-of-view the GSs are mechatronic systems (mainly positioning servo-systems) built usually in a close loop structure and controlled by different control algorithms. The paper presents a Model based Predictive Control (MPC) solution for the mobile equipment (moving magnet and galvomirror) of a GS. The development of a high-performance control solution is based to a basic closed loop GS which consists of a PD-L1 controller and a servomotor. The mathematical model (MM) and the parameters of the basic construction are identified using a theoretical approach followed by an experimental identification. The equipment is used in our laboratory for better dynamical performances for biomedical imaging systems. The control solutions proposed are supported by simulations carried out in Matlab/Simulink.",
"title": ""
},
{
"docid": "a913255762a5ced0fe00d08c599333d9",
"text": "The electroencephalogram (EEG) consists of an underlying background process with superimposed transient nonstationarities such as epileptic spikes (ESs). The detection of ESs in the EEG is of particular importance in the diagnosis of epilepsy. In this paper a new approach for detecting ESs in EEG recordings is presented. It is based on a time-varying autoregressive model (TVAR) that makes use of the nonstationarities of the EEG signal. The autoregressive (AR) parameters are estimated via Kalman filtering (KF). In our method, the EEG signal is first preprocessed to accentuate ESs and attenuate background activity, and then passed through a thresholding function to determine ES locations. The proposed method is evaluated using simulated signals as well as real inter-ictal EEGs",
"title": ""
},
{
"docid": "ecd6857a5b87b241a4d422cc49f5c116",
"text": "Cloud offloading is considered a promising approach to energy conservation and storage/computation enhancement for resource limited mobile devices. In this paper, we present a Lyapunov optimization based scheme for cloud offloading scheduling, as well as download scheduling for cloud execution output, for multiple applications running in a mobile device with a multi-core CPU. We derive an online algorithm and prove performance bounds for the proposed algorithm with respect to average power consumption and average queue length, which is indicative of delay, and reveal the fundamental trade-off between the two optimization goals.",
"title": ""
},
{
"docid": "c824c8bb8fd9b0b3f0f89df24e8f53d0",
"text": "Ovarian cysts are an extremely common gynecological problem in adolescent. Majority of ovarian cysts are benign with few cases being malignant. Ovarian serous cystadenoma are rare in children. A 14-year-old presented with abdominal pain and severe abdominal distention. She underwent laparotomy and after surgical removal, the mass was found to be ovarian serous cystadenoma on histology. In conclusions, germ cell tumors the most important causes for the giant ovarian masses in children. Epithelial tumors should not be forgotten in the differential diagnosis. Keyword: Adolescent; Ovarian Cysts/diagnosis*; Cystadenoma, Serous/surgery; Ovarian Neoplasms/surgery; Ovarian cystadenoma",
"title": ""
},
{
"docid": "ce08b02ae03c8496e051c3443874de8f",
"text": "The goal was to determine the utility and accuracy of automated analysis of single-lead electrocardiogram (ECG) data using two algorithms, cardiopulmonary coupling (CPC), and cyclic variation of heart rate (CVHR) to identify sleep apnea (SA). The CPC-CVHR algorithms were applied to identify SA by analyzing ECG from diagnostic polysomnography (PSG) from 47 subjects. The studies were rescored according to updated AASM scoring rules, both manually by a certified technologist and using an FDA-approved automated scoring software, Somnolyzer (Philips Inc., Monroeville, PA). The CPC+CVHR output of Sleep Quality Index (SQI), Sleep Apnea Indicator (SAI), elevated low frequency coupling broadband (eLFCBB) and elevated low frequency coupling narrow-band (eLFCNB) were compared to the manual and automated scoring of apnea hypopnea index (AHI). A high degree of agreement was noted between the CPC-CVHR against both the manually rescored AHI and the computerized scored AHI to identify patients with moderate and severe sleep apnea (AHI > 15). The combined CPC+CVHR algorithms, when compared to the manually scored PSG output presents sensitivity 89%, specificity 79%, agreement 85%, PPV (positive predictive value) 0.86 and NPV (negative predictive value) 0.83, and substantial Kappa 0.70. Comparing the output of the automated scoring software to the manual scoring demonstrated sensitivity 93%, specificity 79%, agreement 87%, PPV 0.87, NPV 0.88, and substantial Kappa 0.74. The CPC+CVHR technology performed as accurately as the automated scoring software to identify patients with moderate to severe SA, demonstrating a clinically powerful tool that can be implemented in various clinical settings to identify patients at risk for SA. NCT01234077.",
"title": ""
},
{
"docid": "c733ee2715e69a674f3e8db46ca8c5b3",
"text": "Authentication is of paramount importance for all modern networked applications. The username/password paradigm is ubiquitous. This paradigm suffices for many applications that require BLOCKIN BLOCKIN a BLOCKIN BLOCKIN relatively BLOCKIN BLOCKIN low BLOCKIN BLOCKIN level BLOCKIN BLOCKIN of BLOCKIN BLOCKIN assurance BLOCKIN BLOCKIN about BLOCKIN BLOCKIN the BLOCKIN BLOCKIN identity BLOCKIN BLOCKIN of BLOCKIN BLOCKIN the BLOCKIN BLOCKIN end BLOCKIN BLOCKIN user, BLOCKIN BLOCKIN but BLOCKIN BLOCKIN it BLOCKIN BLOCKIN quickly BLOCKIN BLOCKIN breaks down when a stronger assertion of the user's identity is required. Traditionally, this is where two-‐ or multi-‐factor authentication comes in, providing a higher level of assurance. There is a multitude of BLOCKIN BLOCKIN two-‐factor BLOCKIN BLOCKIN authentication BLOCKIN BLOCKIN solutions BLOCKIN BLOCKIN available, BLOCKIN BLOCKIN but BLOCKIN BLOCKIN we BLOCKIN BLOCKIN feel BLOCKIN BLOCKIN that BLOCKIN BLOCKIN many BLOCKIN BLOCKIN solutions BLOCKIN BLOCKIN do BLOCKIN BLOCKIN not BLOCKIN BLOCKIN meet BLOCKIN BLOCKIN the needs of our community. They are invariably expensive, difficult to roll out in heterogeneous user groups (like student populations), often closed source and closed technology and have usability problems that make them hard to use. In this paper we will give an overview of the two-‐factor au-‐ thentication landscape and address the issues of closed versus open solutions. We will introduce a novel open standards-‐based authentication technology that we have developed and released in open source. We will then provide a classification of two-‐factor authentication technologies, and we will finish with an overview of future work.",
"title": ""
},
{
"docid": "0e19123e438f39c4404d4bd486348247",
"text": "Boundary and edge cues are highly beneficial in improving a wide variety of vision tasks such as semantic segmentation, object recognition, stereo, and object proposal generation. Recently, the problem of edge detection has been revisited and significant progress has been made with deep learning. While classical edge detection is a challenging binary problem in itself, the category-aware semantic edge detection by nature is an even more challenging multi-label problem. We model the problem such that each edge pixel can be associated with more than one class as they appear in contours or junctions belonging to two or more semantic classes. To this end, we propose a novel end-to-end deep semantic edge learning architecture based on ResNet and a new skip-layer architecture where category-wise edge activations at the top convolution layer share and are fused with the same set of bottom layer features. We then propose a multi-label loss function to supervise the fused activations. We show that our proposed architecture benefits this problem with better performance, and we outperform the current state-of-the-art semantic edge detection methods by a large margin on standard data sets such as SBD and Cityscapes.",
"title": ""
},
{
"docid": "5365f6f5174c3d211ea562c8a7fa0aab",
"text": "Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the ba- sics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing discrimina- tor class with strong distinguishing power against the particular generator class (instead of against all possible generators.)",
"title": ""
},
{
"docid": "d529d1052fce64ae05fbc64d2b0450ab",
"text": "Today, many industrial companies must face problems raised by maintenance. In particular, the anomaly detection problem is probably one of the most challenging. In this paper we focus on the railway maintenance task and propose to automatically detect anomalies in order to predict in advance potential failures. We first address the problem of characterizing normal behavior. In order to extract interesting patterns, we have developed a method to take into account the contextual criteria associated to railway data (itinerary, weather conditions, etc.). We then measure the compliance of new data, according to extracted knowledge, and provide information about the seriousness and the exact localization of a detected anomaly. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7bf5aaa12c9525909f39dc8af8774927",
"text": "Certain deterministic non-linear systems may show chaotic behaviour. Time series derived from such systems seem stochastic when analyzed with linear techniques. However, uncovering the deterministic structure is important because it allows constructing more realistic and better models and thus improved predictive capabilities. This paper provides a review of two main key features of chaotic systems, the dimensions of their strange attractors and the Lyapunov exponents. The emphasis is on state space reconstruction techniques that are used to estimate these properties, given scalar observations. Data generated from equations known to display chaotic behaviour are used for illustration. A compilation of applications to real data from widely di erent elds is given. If chaos is found to be present, one may proceed to build non-linear models, which is the topic of the second paper in this series.",
"title": ""
},
{
"docid": "26029eb824fc5ad409f53b15bfa0dc15",
"text": "Detecting contradicting statements is a fundamental and challenging natural language processing and machine learning task, with numerous applications in information extraction and retrieval. For instance, contradictions need to be recognized by question answering systems or multi-document summarization systems. In terms of machine learning, it requires the ability, through supervised learning, to accurately estimate and capture the subtle differences between contradictions and for instance, paraphrases. In terms of natural language processing, it demands a pipeline approach with distinct phases in order to extract as much knowledge as possible from sentences. Previous state-of-the-art systems rely often on semantics and alignment relations. In this work, I move away from the commonly setup used in this domain, and address the problem of detecting contradictions as a classification task. I argue that for such classification, one can heavily rely on features based on those used for detecting paraphrases and recognizing textual entailment, alongside with numeric and string based features. This M.Sc. dissertation provides a system capable of detecting contradictions from a pair of affirmations published across newspapers with both a F1-score and Accuracy of 71%. Furthermore, this M.Sc. dissertation provides an assessment of what are the most informative features for detecting contradictions and paraphrases and infer if exists a correlation between contradiction detection and paraphrase identification.",
"title": ""
},
{
"docid": "125148cad2e3aef1cf7cb1fb9698f305",
"text": "BACKGROUND\nDental decay is the most common childhood disease worldwide and most of the decay remains untreated. In the Philippines caries levels are among the highest in the South East Asian region. Elementary school children suffer from high prevalence of stunting and underweight.The present study aimed to investigate the association between untreated dental decay and Body Mass Index (BMI) among 12-year-old Filipino children.\n\n\nMETHODS\nData collection was part of the National Oral Health Survey, a representative cross-sectional study of 1951 11-13-year-old school children using a modified, stratified cluster sampling design based on population classifications of the Philippine National Statistics Office. Caries was scored according to WHO criteria (1997) and odontogenic infections using the PUFA index. Anthropometric measures were performed by trained nurses. Some socio-economic determinants were included as potential confounding factors.\n\n\nRESULTS\nThe overall prevalence of caries (DMFT + dmft > 0) was 82.3% (95%CI; 80.6%-84.0%). The overall prevalence of odontogenic infections due to caries (PUFA + pufa > 0) was 55.7% (95% CI; 53.5%-57.9%) The BMI of 27.1% (95%CI; 25.1%-29.1%) of children was below normal, 1% (95%CI; 0.5%-1.4%) had a BMI above normal. The regression coefficient between BMI and caries was highly significant (p < 0.001). Children with odontogenic infections (PUFA + pufa > 0) as compared to those without odontogenic infections had an increased risk of a below normal BMI (OR: 1.47; 95% CI: 1.19-1.80).\n\n\nCONCLUSIONS\nThis is the first-ever representative survey showing a significant association between caries and BMI and particularly between odontogenic infections and below normal BMI. An expanded model of hypothesised associations is presented that includes progressed forms of dental decay as a significant, yet largely neglected determinant of poor child development.",
"title": ""
},
{
"docid": "1de3a70567e68eebfebe2bc797f58e08",
"text": "This article provides a comprehensive description of FastSLAM, a new family of algorithms for the simultaneous localization and mapping problem, which specifically address hard data association problems. The algorithm uses a particle filter for sampling robot paths, and extended Kalman filters for representing maps acquired by the vehicle. This article presents two variants of this algorithm, the original algorithm along with a more recent variant that provides improved performance in certain operating regimes. In addition to a mathematical derivation of the new algorithm, we present a proof of convergence and experimental results on its performance on real-world data.",
"title": ""
}
] |
scidocsrr
|
8c79503535be35d2633d85c0a0da95f1
|
Blockchains and Bitcoin: Regulatory responses to cryptocurrencies
|
[
{
"docid": "45b1cb6c9393128c9a9dcf9dbeb50778",
"text": "Bitcoin, a distributed, cryptographic, digital currency, gained a lot of media attention for being an anonymous e-cash system. But as all transactions in the network are stored publicly in the blockchain, allowing anyone to inspect and analyze them, the system does not provide real anonymity but pseudonymity. There have already been studies showing the possibility to deanonymize bitcoin users based on the transaction graph and publicly available data. Furthermore, users could be tracked by bitcoin exchanges or shops, where they have to provide personal information that can then be linked to their bitcoin addresses. Special bitcoin mixing services claim to obfuscate the origin of transactions and thereby increase the anonymity of its users. In this paper we evaluate three of these services – Bitcoin Fog, BitLaundry, and the Send Shared functionality of Blockchain.info – by analyzing the transaction graph. While Bitcoin Fog and Blockchain.info successfully mix our transaction, we are able to find a direct relation between the input and output transactions in the graph of BitLaundry.",
"title": ""
}
] |
[
{
"docid": "8cd73397c9a79646ac1b2acac44dd8a7",
"text": "Liquid micro-jet array impingement cooling of a power conversion module with 12 power switching devices (six insulated gate bipolar transistors and six diodes) is investigated. The 1200-V/150-A module converts dc input power to variable frequency, variable voltage three-phase ac output to drive a 50HP three-phase induction motor. The silicon devices are attached to a packaging layer [direct bonded copper (DBC)], which in turn is soldered to a metal base plate. DI water micro-jet array impinges on the base plate of the module targeted at the footprint area of the devices. Although the high heat flux cooling capability of liquid impingement is a well-established finding, the impact of its practical implementation in power systems has never been addressed. This paper presents the first one-to-one comparison of liquid micro-jet array impingement cooling (JAIC) with the traditional methods, such as air-cooling over finned heat sink or liquid flow in multi-pass cold plate. Results show that compared to the conventional cooling methods, JAIC can significantly enhance the module output power. If the output power is maintained constant, the device temperature can be reduced drastically by JAIC. Furthermore, jet impingement provides uniform cooling for multiple devices placed over a large area, thereby reducing non-uniformity of temperature among the devices. The reduction in device temperature, both its absolute value and the non-uniformity, implies multi-fold increase in module reliability. The results thus illustrate the importance of efficient thermal management technique for compact and reliable power conversion application",
"title": ""
},
{
"docid": "44bd9d0b66cb8d4f2c4590b4cb724765",
"text": "AIM\nThis paper is a description of inductive and deductive content analysis.\n\n\nBACKGROUND\nContent analysis is a method that may be used with either qualitative or quantitative data and in an inductive or deductive way. Qualitative content analysis is commonly used in nursing studies but little has been published on the analysis process and many research books generally only provide a short description of this method.\n\n\nDISCUSSION\nWhen using content analysis, the aim was to build a model to describe the phenomenon in a conceptual form. Both inductive and deductive analysis processes are represented as three main phases: preparation, organizing and reporting. The preparation phase is similar in both approaches. The concepts are derived from the data in inductive content analysis. Deductive content analysis is used when the structure of analysis is operationalized on the basis of previous knowledge.\n\n\nCONCLUSION\nInductive content analysis is used in cases where there are no previous studies dealing with the phenomenon or when it is fragmented. A deductive approach is useful if the general aim was to test a previous theory in a different situation or to compare categories at different time periods.",
"title": ""
},
{
"docid": "af2a1083436450b9147eb7b51be5c761",
"text": "Over the past century, various value models have been proposed. To determine which value model best predicts prosocial behavior, mental health, and pro-environmental behavior, we subjected seven value models to a hierarchical regression analysis. A sample of University students (N = 271) completed the Portrait Value Questionnaire (Schwartz et al., 2012), the Basic Value Survey (Gouveia et al., 2008), and the Social Value Orientation scale (Van Lange et al., 1997). Additionally, they completed the Values Survey Module (Hofstede and Minkov, 2013), Inglehart's (1977) materialism-postmaterialism items, the Study of Values, fourth edition (Allport et al., 1960; Kopelman et al., 2003), and the Rokeach (1973) Value Survey. However, because the reliability of the latter measures was low, only the PVQ-RR, the BVS, and the SVO where entered into our analysis. Our results provide empirical evidence that the PVQ-RR is the strongest predictor of all three outcome variables, explaining variance above and beyond the other two instruments in almost all cases. The BVS significantly predicted prosocial and pro-environmental behavior, while the SVO only explained variance in pro-environmental behavior.",
"title": ""
},
{
"docid": "c1bfef951e9775f6ffc949c5110e1bd1",
"text": "In the interest of more systematically documenting the early signs of autism, and of testing specific hypotheses regarding their underlying neurodevelopmental substrates, we have initiated a longitudinal study of high-risk infants, all of whom have an older sibling diagnosed with an autistic spectrum disorder. Our sample currently includes 150 infant siblings, including 65 who have been followed to age 24 months, who are the focus of this paper. We have also followed a comparison group of low-risk infants. Our measures include a novel observational scale (the first, to our knowledge, that is designed to assess autism-specific behavior in infants), a computerized visual orienting task, and standardized measures of temperament, cognitive and language development. Our preliminary results indicate that by 12 months of age, siblings who are later diagnosed with autism may be distinguished from other siblings and low-risk controls on the basis of: (1) several specific behavioral markers, including atypicalities in eye contact, visual tracking, disengagement of visual attention, orienting to name, imitation, social smiling, reactivity, social interest and affect, and sensory-oriented behaviors; (2) prolonged latency to disengage visual attention; (3) a characteristic pattern of early temperament, with marked passivity and decreased activity level at 6 months, followed by extreme distress reactions, a tendency to fixate on particular objects in the environment, and decreased expression of positive affect by 12 months; and (4) delayed expressive and receptive language. We discuss these findings in the context of various neural networks thought to underlie neurodevelopmental abnormalities in autism, including poor visual orienting. Over time, as we are able to prospectively study larger numbers and to examine interrelationships among both early-developing behaviors and biological indices of interest, we hope this work will advance current understanding of the neurodevelopmental origins of autism.",
"title": ""
},
{
"docid": "f28d48c838af52caca200e69ebe4cc73",
"text": "This paper shows a new class-<formula formulatype=\"inline\"><tex Notation=\"TeX\">$E$</tex></formula> amplifier topology with the objective to increase the nominal class-<formula formulatype=\"inline\"><tex Notation=\"TeX\">$E$</tex></formula> output power for a given voltage and current stress on the power transistor. To obtain that result, a parallel LC resonator is added to the load network, tuned to the second harmonic of the switching frequency. A class-<formula formulatype=\"inline\"><tex Notation=\"TeX\">$E$</tex></formula> power amplifier is obtained whose transistor-voltage waveform peak value is 81% of the peak value of the voltage of a nominal class- <formula formulatype=\"inline\"><tex Notation=\"TeX\">$E$</tex></formula> amplifier using the same dc supply voltage. In this amplifier, the peak voltage across the transistor is 3.0 times the dc supply voltage, instead of the 3.6 times associated with nominal class-<formula formulatype=\"inline\"><tex Notation=\"TeX\">$E$</tex></formula> amplifiers. A normalized design is presented, and the behavior of the circuit is analyzed with simulation showing that the ratio of output power versus transistor peak voltage times peak current is 20.4% better than the nominal class <formula formulatype=\"inline\"><tex Notation=\"TeX\">$E$</tex></formula>. The proposed converter and normalized design approach are verified by simulations and measurements done on an experimental prototype.",
"title": ""
},
{
"docid": "d698ce3df2f1216b7b78237dcecb0df1",
"text": "A high-efficiency CMOS rectifier circuit for UHF RFIDs was developed. The rectifier has a cross-coupled bridge configuration and is driven by a differential RF input. A differential-drive active gate bias mechanism simultaneously enables both low ON-resistance and small reverse leakage of diode-connected MOS transistors, resulting in large power conversion efficiency (PCE), especially under small RF input power conditions. A test circuit of the proposed differential-drive rectifier was fabricated with 0.18 mu m CMOS technology, and the measured performance was compared with those of other types of rectifiers. Dependence of the PCE on the input RF signal frequency, output loading conditions and transistor sizing was also evaluated. At the single-stage configuration, 67.5% of PCE was achieved under conditions of 953 MHz, - 12.5 dBm RF input and 10 KOmega output load. This is twice as large as that of the state-of-the-art rectifier circuit. The peak PCE increases with a decrease in operation frequency and with an increase in output load resistance. In addition, experimental results show the existence of an optimum transistor size in accordance with the output loading conditions. The multi-stage configuration for larger output DC voltage is also presented.",
"title": ""
},
{
"docid": "491a2805f928d081261b5a140c9aa952",
"text": "The proliferation of IoT devices that can be more easily compromised than desktop computers has led to an increase in IoT-based botnet attacks. To mitigate this threat, there is a need for new methods that detect attacks launched from compromised IoT devices and that differentiate between hours- and milliseconds-long IoT-based attacks. In this article, we propose a novel network-based anomaly detection method for the IoT called N-BaIoT that extracts behavior snapshots of the network and uses deep autoencoders to detect anomalous network traffic from compromised IoT devices. To evaluate our method, we infected nine commercial IoT devices in our lab with two widely known IoT-based botnets, Mirai and BASHLITE. The evaluation results demonstrated our proposed methods ability to accurately and instantly detect the attacks as they were being launched from the compromised IoT devices that were part of a botnet.",
"title": ""
},
{
"docid": "3bf37b20679ca6abd022571e3356e95d",
"text": "OBJECTIVE\nOur goal is to create an ontology that will allow data integration and reasoning with subject data to classify subjects, and based on this classification, to infer new knowledge on Autism Spectrum Disorder (ASD) and related neurodevelopmental disorders (NDD). We take a first step toward this goal by extending an existing autism ontology to allow automatic inference of ASD phenotypes and Diagnostic & Statistical Manual of Mental Disorders (DSM) criteria based on subjects' Autism Diagnostic Interview-Revised (ADI-R) assessment data.\n\n\nMATERIALS AND METHODS\nKnowledge regarding diagnostic instruments, ASD phenotypes and risk factors was added to augment an existing autism ontology via Ontology Web Language class definitions and semantic web rules. We developed a custom Protégé plugin for enumerating combinatorial OWL axioms to support the many-to-many relations of ADI-R items to diagnostic categories in the DSM. We utilized a reasoner to infer whether 2642 subjects, whose data was obtained from the Simons Foundation Autism Research Initiative, meet DSM-IV-TR (DSM-IV) and DSM-5 diagnostic criteria based on their ADI-R data.\n\n\nRESULTS\nWe extended the ontology by adding 443 classes and 632 rules that represent phenotypes, along with their synonyms, environmental risk factors, and frequency of comorbidities. Applying the rules on the data set showed that the method produced accurate results: the true positive and true negative rates for inferring autistic disorder diagnosis according to DSM-IV criteria were 1 and 0.065, respectively; the true positive rate for inferring ASD based on DSM-5 criteria was 0.94.\n\n\nDISCUSSION\nThe ontology allows automatic inference of subjects' disease phenotypes and diagnosis with high accuracy.\n\n\nCONCLUSION\nThe ontology may benefit future studies by serving as a knowledge base for ASD. In addition, by adding knowledge of related NDDs, commonalities and differences in manifestations and risk factors could be automatically inferred, contributing to the understanding of ASD pathophysiology.",
"title": ""
},
{
"docid": "768240033185f6464d2274181370843a",
"text": "Most of today's commercial companies heavily rely on social media and community management tools to interact with their clients and analyze their online behaviour. Nonetheless, these tools still lack evolved data mining and visualization features to tailor the analysis in order to support useful marketing decisions. We present an original methodology that aims at formalizing the marketing need of the company and develop a tool that can support it. The methodology is derived from the Cross-Industry Standard Process for Data Mining (CRISP-DM) and includes additional steps dedicated to the design and development of visualizations of mined data. We followed the methodology in two use cases with Swiss companies. First, we developed a prototype that aims at understanding the needs of tourists based on Flickr and Instagram data. In that use case, we extend the existing literature by enriching hashtags analysis methods with a semantic network based on Linked Data. Second, we analyzed internal customer data of an online discount retailer to help them define guerilla marketing measures. We report on the challenges of integrating Facebook data in the process. Informal feedback from domain experts confirms the strong potential of such advanced analytic features based on social data to inform marketing decisions.",
"title": ""
},
{
"docid": "6a03d3b4159fe35e8772d5e3e8d656c1",
"text": "In this paper, we propose a novel 3D feature point detection algorithm using Multiresolution Surface Variation (MSV). The proposed algorithm is used to extract 3D features from a cluttered, unstructured environment for use in realtime Simultaneous Localisation and Mapping (SLAM) algorithms running on a mobile robot. The salient feature of the proposed method is that, it can not only handle dense, uniform 3D point clouds (such as those obtained from Kinect or rotating 2D Lidar), but also (perhaps more importantly) handle sparse, non-uniform 3D point clouds (obtained from sensors such as 3D Lidar) and produce robust, repeatable key points that are specifically suitable for SLAM. The efficacy of the proposed method is evaluated using a dataset collected from a mobile robot with a 3D Velodyne Lidar (VLP-16) mounted on top.",
"title": ""
},
{
"docid": "a389222b13819ccd164a6a2f80e2e912",
"text": "Graphene, in its ideal form, is a two-dimensional (2D) material consisting of a single layer of carbon atoms arranged in a hexagonal lattice. The richness in morphological, physical, mechanical, and optical properties of ideal graphene has stimulated enormous scientific and industrial interest, since its first exfoliation in 2004. In turn, the production of graphene in a reliable, controllable, and scalable manner has become significantly important to bring us closer to practical applications of graphene. To this end, chemical vapor deposition (CVD) offers tantalizing opportunities for the synthesis of large-area, uniform, and high-quality graphene films. However, quite different from the ideal 2D structure of graphene, in reality, the currently available CVD-grown graphene films are still suffering from intrinsic defective grain boundaries, surface contaminations, and wrinkles, together with low growth rate and the requirement of inevitable transfer. Clearly, a gap still exits between the reality of CVD-derived graphene, especially in industrial production, and ideal graphene with outstanding properties. This Review will emphasize the recent advances and strategies in CVD production of graphene for settling these issues to bridge the giant gap. We begin with brief background information about the synthesis of nanoscale carbon allotropes, followed by the discussion of fundamental growth mechanism and kinetics of CVD growth of graphene. We then discuss the strategies for perfecting the quality of CVD-derived graphene with regard to domain size, cleanness, flatness, growth rate, scalability, and direct growth of graphene on functional substrate. Finally, a perspective on future development in the research relevant to scalable growth of high-quality graphene is presented.",
"title": ""
},
{
"docid": "435fcf5dab986fd87db6fc24fef3cc1a",
"text": "Web applications make life more convenient through on the activities. Many web applications have several kind of user input (e.g. personal information, a user's comment of commercial goods, etc.) for the activities. However, there are various vulnerabilities in input functions of web applications. It is possible to try malicious actions using free accessibility of the web applications. The attacks by exploitation of these input vulnerabilities enable to be performed by injecting malicious web code; it enables one to perform various illegal actions, such as SQL Injection Attacks (SQLIAs) and Cross Site Scripting (XSS). These actions come down to theft, replacing personal information, or phishing. Many solutions have devised for the malicious web code, such as AMNESIA [1] and SQL Check [2], etc. The methods use parser for the code, and limited to fixed and very small patterns, and are difficult to adapt to variations. Machine learning method can give leverage to cover far broader range of malicious web code and is easy to adapt to variations and changes. Therefore, we suggests adaptable classification of malicious web code by machine learning approach such as Support Vector Machine (SVM)[3], Naïve-Bayes[4], and k-Nearest Neighbor Algorithm[5] for detecting the exploitation user inputs.",
"title": ""
},
{
"docid": "6de91d6b71ff97c5564dd3e3a42092a0",
"text": "Characteristics of physical movements are indicative of infants' neuro-motor development and brain dysfunction. For instance, infant seizure, a clinical signal of brain dysfunction, could be identified and predicted by monitoring its physical movements. With the advance of wearable sensor technology, including the miniaturization of sensors, and the increasing broad application of micro- and nanotechnology, and smart fabrics in wearable sensor systems, it is now possible to collect, store, and process multimodal signal data of infant movements in a more efficient, more comfortable, and non-intrusive way. This review aims to depict the state-of-the-art of wearable sensor systems for infant movement monitoring. We also discuss its clinical significance and the aspect of system design.",
"title": ""
},
{
"docid": "2185097978553d5030252ffa9240fb3c",
"text": "The concept of celebrity culture remains remarkably undertheorized in the literature, and it is precisely this gap that this article aims to begin filling in. Starting with media culture definitions, celebrity culture is conceptualized as collections of sense-making practices whose main resources of meaning are celebrity. Consequently, celebrity cultures are necessarily plural. This approach enables us to focus on the spatial differentiation between (sub)national celebrity cultures, for which the Flemish case is taken as a central example. We gain a better understanding of this differentiation by adopting a translocal frame on culture and by focusing on the construction of celebrity cultures through the ‘us and them’ binary and communities. Finally, it is also suggested that what is termed cultural working memory improves our understanding of the remembering and forgetting of actual celebrities, as opposed to more historical figures captured by concepts such as cultural memory.",
"title": ""
},
{
"docid": "c26caff761092bc5b6af9f1c66986715",
"text": "The mechanisms used by DNN accelerators to leverage datareuse and perform data staging are known as dataflow, and they directly impact the performance and energy efficiency of DNN accelerator designs. Co-optimizing the accelerator microarchitecture and its internal dataflow is crucial for accelerator designers, but there is a severe lack of tools and methodologies to help them explore the co-optimization design space. In this work, we first introduce a set of datacentric directives to concisely specify DNN dataflows in a compiler-friendly form. Next, we present an analytical model, MAESTRO, that estimates various cost-benefit tradeoffs of a dataflow including execution time and energy efficiency for a DNN model and hardware configuration. Finally, we demonstrate the use of MAESTRO to drive a hardware design space exploration (DSE) engine. The DSE engine searched 480M designs and identified 2.5M valid designs at an average rate of 0.17M designs per second, and also identified throughputand energy-optimized designs among this set.",
"title": ""
},
{
"docid": "938f49e103d0153c82819becf96f126c",
"text": "Humans interpret texts with respect to some background information, or world knowledge, and we would like to develop automatic reading comprehension systems that can do the same. In this paper, we introduce a task and several models to drive progress towards this goal. In particular, we propose the task of rare entity prediction: given a web document with several entities removed, models are tasked with predicting the correct missing entities conditioned on the document context and the lexical resources. This task is challenging due to the diversity of language styles and the extremely large number of rare entities. We propose two recurrent neural network architectures which make use of external knowledge in the form of entity descriptions. Our experiments show that our hierarchical LSTM model performs significantly better at the rare entity prediction task than those that do not make use of external resources.",
"title": ""
},
{
"docid": "7e800094f52080194d94bdedf1d92b9c",
"text": "IMPORTANCE\nHealth care-associated infections (HAIs) account for a large proportion of the harms caused by health care and are associated with high costs. Better evaluation of the costs of these infections could help providers and payers to justify investing in prevention.\n\n\nOBJECTIVE\nTo estimate costs associated with the most significant and targetable HAIs.\n\n\nDATA SOURCES\nFor estimation of attributable costs, we conducted a systematic review of the literature using PubMed for the years 1986 through April 2013. For HAI incidence estimates, we used the National Healthcare Safety Network of the Centers for Disease Control and Prevention (CDC).\n\n\nSTUDY SELECTION\nStudies performed outside the United States were excluded. Inclusion criteria included a robust method of comparison using a matched control group or an appropriate regression strategy, generalizable populations typical of inpatient wards and critical care units, methodologic consistency with CDC definitions, and soundness of handling economic outcomes.\n\n\nDATA EXTRACTION AND SYNTHESIS\nThree review cycles were completed, with the final iteration carried out from July 2011 to April 2013. Selected publications underwent a secondary review by the research team.\n\n\nMAIN OUTCOMES AND MEASURES\nCosts, inflated to 2012 US dollars.\n\n\nRESULTS\nUsing Monte Carlo simulation, we generated point estimates and 95% CIs for attributable costs and length of hospital stay. On a per-case basis, central line-associated bloodstream infections were found to be the most costly HAIs at $45,814 (95% CI, $30,919-$65,245), followed by ventilator-associated pneumonia at $40,144 (95% CI, $36,286-$44,220), surgical site infections at $20,785 (95% CI, $18,902-$22,667), Clostridium difficile infection at $11,285 (95% CI, $9118-$13,574), and catheter-associated urinary tract infections at $896 (95% CI, $603-$1189). The total annual costs for the 5 major infections were $9.8 billion (95% CI, $8.3-$11.5 billion), with surgical site infections contributing the most to overall costs (33.7% of the total), followed by ventilator-associated pneumonia (31.6%), central line-associated bloodstream infections (18.9%), C difficile infections (15.4%), and catheter-associated urinary tract infections (<1%).\n\n\nCONCLUSIONS AND RELEVANCE\nWhile quality improvement initiatives have decreased HAI incidence and costs, much more remains to be done. As hospitals realize savings from prevention of these complications under payment reforms, they may be more likely to invest in such strategies.",
"title": ""
},
{
"docid": "44618874fe7725890fbfe9fecde65853",
"text": "Software development teams in large scale offshore enterprise development programmes are often under intense pressure to deliver high quality software within challenging time contraints. Project failures can attract adverse publicity and damage corporate reputations. Agile methods have been advocated to reduce project risks, improving both productivity and product quality. This article uses practitioner descriptions of agile method tailoring to explore large scale offshore enterprise development programmes with a focus on product owner role tailoring, where the product owner identifies and prioritises customer requirements. In globalised projects, the product owner must reconcile competing business interests, whilst generating and then prioritising large numbers of requirements for numerous development teams. The study comprises eight international companies, based in London, Bangalore and Delhi. Interviews with 46 practitioners were conducted between February 2010 and May 2012. Grounded theory was used to identify that product owners form into teams. The main contribution of this research is to describe the nine product owner team functions identified: groom, prioritiser, release master, technical architect, governor, communicator, traveller, intermediary and risk assessor. These product owner functions arbitrate between conflicting customer requirements, approve release schedules, disseminate architectural design decisions, provide technical governance and propogate information across teams. The functions identified in this research are mapped to a scrum of scrums process, and a taxonomy of the functions shows how focusing on either decision-making or information dissemination in each helps to tailor agile methods to large scale offshore enterprise development programmes.",
"title": ""
}
] |
scidocsrr
|
f143803c5edfc87ae4df5ee8ddb51f52
|
Mood Based Music Categorization System for Bollywood Music
|
[
{
"docid": "5617fc64953a2e1e781b39eb3bc273a5",
"text": "Music expresses emotion. However, analyzing the emotion in music by computer is a difficult task. Some work can be found in the literature, but the results are not satisfactory. In this paper, an emotion detection and classification system for pop music is presented. The system extracts feature values from the training music files by PsySound2 and generates a music model from the resulting feature dataset by a classification algorithm. The model is then used to detect the emotion perceived in music clips. To further improve the classification accuracy, we evaluate the significance of each music feature and remove the insignificant features. The system uses a database of 195 music clips to enhance reliability and robustness.",
"title": ""
},
{
"docid": "ec681bc427c66adfad79008840ea9b60",
"text": "With the rapid development of the Computer Science and Technology, It has become a major problem for the users that how to quickly find useful or needed information. Text categorization can help people to solve this question. The feature selection method has become one of the most critical techniques in the field of the text automatic categorization. A new method of the text feature selection based on Information Gain and Genetic Algorithm is proposed in this paper. This method chooses the feature based on information gain with the frequency of items. Meanwhile, for the information filtering systems, this method has been improved fitness function to fully consider the characteristics of weight, text and vector similarity dimension, etc. The experiment has proved that the method can reduce the dimension of text vector and improve the precision of text classification.",
"title": ""
},
{
"docid": "5c598998ffcf3d6008e8e5eed94fc396",
"text": "Music information retrieval (MIR) is an emerging research area that receives growing attention from both the research community and music industry. It addresses the problem of querying and retrieving certain types of music from large music data set. Classification is a fundamental problem in MIR. Many tasks in MIR can be naturally cast in a classification setting, such as genre classification, mood classification, artist recognition, instrument recognition, etc. Music annotation, a new research area in MIR that has attracted much attention in recent years, is also a classification problem in the general sense. Due to the importance of music classification in MIR research, rapid development of new methods, and lack of review papers on recent progress of the field, we provide a comprehensive review on audio-based classification in this paper and systematically summarize the state-of-the-art techniques for music classification. Specifically, we have stressed the difference in the features and the types of classifiers used for different classification tasks. This survey emphasizes on recent development of the techniques and discusses several open issues for future research.",
"title": ""
}
] |
[
{
"docid": "87be04b184d27c006bb06dd9906a9422",
"text": "With the significant growth of the markets for consumer electronics and various embedded systems, flash memory is now an economic solution for storage systems design. Because index structures require intensively fine-grained updates/modifications, block-oriented access over flash memory could introduce a significant number of redundant writes. This might not only severely degrade the overall performance, but also damage the reliability of flash memory. In this paper, we propose a very different approach, which can efficiently handle fine-grained updates/modifications caused by B-tree index access over flash memory. The implementation is done directly over the flash translation layer (FTL); hence, no modifications to existing application systems are needed. We demonstrate that when index structures are adopted over flash memory, the proposed methodology can significantly improve the system performance and, at the same time, reduce both the overhead of flash-memory management and the energy dissipation. The average response time of record insertions and deletions was also significantly reduced.",
"title": ""
},
{
"docid": "8fffe94d662d46b977e0312dc790f4a4",
"text": "Airline companies have increasingly employed electronic commerce (eCommerce) for strategic purposes, most notably in order to achieve long-term competitive advantage and global competitiveness by enhancing customer satisfaction as well as marketing efficacy and managerial efficiency. eCommerce has now emerged as possibly the most representative distribution channel in the airline industry. In this study, we describe an extended technology acceptance model (TAM), which integrates subjective norms and electronic trust (eTrust) into the model, in order to determine their relevance to the acceptance of airline business-to-customer (B2C) eCommerce websites (AB2CEWS). The proposed research model was tested empirically using data collected from a survey of customers who had utilized B2C eCommerce websites of two representative airline companies in South Korea (i.e., KAL and ASIANA) for the purpose of purchasing air tickets. Path analysis was employed in order to assess the significance and strength of the hypothesized causal relationships between subjective norms, eTrust, perceived ease of use, perceived usefulness, attitude toward use, and intention to reuse. Our results provide general support for an extended TAM, and also confirmed its robustness in predicting customers’ intention to reuse AB2CEWS. Valuable information was found from our results regarding the management of AB2CEWS in the formulation of airlines’ Internet marketing strategies. 2008 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "6fe77035a5101f60968a189d648e2feb",
"text": "In the past few years, Reddit -- a community-driven platform for submitting, commenting and rating links and text posts -- has grown exponentially, from a small community of users into one of the largest online communities on the Web. To the best of our knowledge, this work represents the most comprehensive longitudinal study of Reddit's evolution to date, studying both (i) how user submissions have evolved over time and (ii) how the community's allocation of attention and its perception of submissions have changed over 5 years based on an analysis of almost 60 million submissions. Our work reveals an ever-increasing diversification of topics accompanied by a simultaneous concentration towards a few selected domains both in terms of posted submissions as well as perception and attention. By and large, our investigations suggest that Reddit has transformed itself from a dedicated gateway to the Web to an increasingly self-referential community that focuses on and reinforces its own user-generated image- and textual content over external sources.",
"title": ""
},
{
"docid": "17c49edf5842fb918a3bd4310d910988",
"text": "In this paper, we present a real-time salient object detection system based on the minimum spanning tree. Due to the fact that background regions are typically connected to the image boundaries, salient objects can be extracted by computing the distances to the boundaries. However, measuring the image boundary connectivity efficiently is a challenging problem. Existing methods either rely on superpixel representation to reduce the processing units or approximate the distance transform. Instead, we propose an exact and iteration free solution on a minimum spanning tree. The minimum spanning tree representation of an image inherently reveals the object geometry information in a scene. Meanwhile, it largely reduces the search space of shortest paths, resulting an efficient and high quality distance transform algorithm. We further introduce a boundary dissimilarity measure to compliment the shortage of distance transform for salient object detection. Extensive evaluations show that the proposed algorithm achieves the leading performance compared to the state-of-the-art methods in terms of efficiency and accuracy.",
"title": ""
},
{
"docid": "6f6ebcdc15339df87b9499c0760936ce",
"text": "This paper outlines the design, implementation and evaluation of CAPTURE - a novel automated, continuously working cyber attack forecast system. It uses a broad range of unconventional signals from various public and private data sources and a set of signals forecasted via the Auto-Regressive Integrated Moving Average (ARIMA) model. While generating signals, auto cross correlation is used to find out the optimum signal aggregation and lead times. Generated signals are used to train a Bayesian classifier against the ground truth of each attack type. We show that it is possible to forecast future cyber incidents using CAPTURE and the consideration of the lead time could improve forecast performance.",
"title": ""
},
{
"docid": "b27dc4a19b44bf2fd13f299de8c33108",
"text": "A large proportion of the world’s population lives in remote rural areas that are geographically isolated and sparsely populated. This paper proposed a hybrid power generation system suitable for remote area application. The concept of hybridizing renewable energy sources is that the base load is to be covered by largest and firmly available renewable source(s) and other intermittent source(s) should augment the base load to cover the peak load of an isolated mini electric grid system. The study is based on modeling, simulation and optimization of renewable energy system in rural area in Sundargarh district of Orissa state, India. The model has designed to provide an optimal system conFigureuration based on hour-by-hour data for energy availability and demands. Various renewable/alternative energy sources, energy storage and their applicability in terms of cost and performance are discussed. The homer software is used to study and design the proposed hybrid alternative energy power system model. The Sensitivity analysis was carried out using Homer program. Based on simulation results, it has been found that renewable/alternative energy sources will replace the conventional energy sources and would be a feasible solution for distribution of electric power for stand alone applications at remote and distant locations.",
"title": ""
},
{
"docid": "7b989f3da78e75d9616826644d210b79",
"text": "BACKGROUND\nUse of cannabis is often an under-reported activity in our society. Despite legal restriction, cannabis is often used to relieve chronic and neuropathic pain, and it carries psychotropic and physical adverse effects with a propensity for addiction. This article aims to update the current knowledge and evidence of using cannabis and its derivatives with a view to the sociolegal context and perspectives for future research.\n\n\nMETHODS\nCannabis use can be traced back to ancient cultures and still continues in our present society despite legal curtailment. The active ingredient, Δ9-tetrahydrocannabinol, accounts for both the physical and psychotropic effects of cannabis. Though clinical trials demonstrate benefits in alleviating chronic and neuropathic pain, there is also significant potential physical and psychotropic side-effects of cannabis. Recent laboratory data highlight synergistic interactions between cannabinoid and opioid receptors, with potential reduction of drug-seeking behavior and opiate sparing effects. Legal rulings also have changed in certain American states, which may lead to wider use of cannabis among eligible persons.\n\n\nCONCLUSIONS\nFamily physicians need to be cognizant of such changing landscapes with a practical knowledge on the pros and cons of medical marijuana, the legal implications of its use, and possible developments in the future.",
"title": ""
},
{
"docid": "b05fc1f939ff50dc07dbbc170cd28478",
"text": "A compact multiresonant antenna for octaband LTE/WWAN operation in the internal smartphone applications is proposed and discussed in this letter. With a small volume of 15×25×4 mm3, the presented antenna comprises two direct feeding strips and a chip-inductor-loaded two-branch shorted strip. The two direct feeding strips can provide two resonant modes at around 1750 and 2650 MHz, and the two-branch shorted strip can generate a double-resonance mode at about 725 and 812 MHz. Moreover, a three-element bandstop matching circuit is designed to generate an additional resonance for bandwidth enhancement of the lower band. Ultimately, up to five resonances are achieved to cover the desired 704-960- and 1710-2690-MHz bands. Simulated and measured results are presented to demonstrate the validity of the proposed antenna.",
"title": ""
},
{
"docid": "c3112126fa386710fb478dcfe978630e",
"text": "In recent years, distributed intelligent microelectromechanical systems (DiMEMSs) have appeared as a new form of distributed embedded systems. DiMEMSs contain thousands or millions of removable autonomous devices, which will collaborate with each other to achieve the final target of the whole system. Programming such systems is becoming an extremely difficult problem. The difficulty is due not only to their inherent nature of distributed collaboration, mobility, large scale, and limited resources of their devices (e.g., in terms of energy, memory, communication, and computation) but also to the requirements of real-time control and tolerance for uncertainties such as inaccurate actuation and unreliable communications. As a result, existing programming languages for traditional distributed and embedded systems are not suitable for DiMEMSs. In this article, we first introduce the origin and characteristics of DiMEMSs and then survey typical implementations of DiMEMSs and related research hotspots. Finally, we propose a real-time programming framework that can be used to design new real-time programming languages for DiMEMSs. The framework is composed of three layers: a real-time programming model layer, a compilation layer, and a runtime system layer. The design challenges and requirements of these layers are investigated. The framework is then discussed in further detail and suggestions for future research are given.",
"title": ""
},
{
"docid": "629c6c7ca3db9e7cad2572c319ec52f0",
"text": "Recent research on pornography suggests that perception of addiction predicts negative outcomes above and beyond pornography use. Research has also suggested that religious individuals are more likely to perceive themselves to be addicted to pornography, regardless of how often they are actually using pornography. Using a sample of 686 unmarried adults, this study reconciles and expands on previous research by testing perceived addiction to pornography as a mediator between religiosity and relationship anxiety surrounding pornography. Results revealed that pornography use and religiosity were weakly associated with higher relationship anxiety surrounding pornography use, whereas perception of pornography addiction was highly associated with relationship anxiety surrounding pornography use. However, when perception of pornography addiction was inserted as a mediator in a structural equation model, pornography use had a small indirect effect on relationship anxiety surrounding pornography use, and perception of pornography addiction partially mediated the association between religiosity and relationship anxiety surrounding pornography use. By understanding how pornography use, religiosity, and perceived pornography addiction connect to relationship anxiety surrounding pornography use in the early relationship formation stages, we hope to improve the chances of couples successfully addressing the subject of pornography and mitigate difficulties in romantic relationships.",
"title": ""
},
{
"docid": "7e3dfe0820123cb1da9857b809df4ae4",
"text": "This paper introduces an overview of Chinese Spelling Check task at SIGHAN Bake-off 2013. We describe all aspects of the task for Chinese spelling check, consisting of task description, data preparation, performance metrics, and evaluation results. This bake-off contains two subtasks, i.e., error detection and error correction. We evaluate the systems that can automatically point out the spelling errors and provide the corresponding corrections in students’ essays, summarize the performance of all participants’ submitted results, and discuss some advanced issues. The hope is that through such evaluation campaigns, more advanced Chinese spelling check techniques will be emerged.",
"title": ""
},
{
"docid": "0ef4cf0b46b43670a3d9554aba6e2d89",
"text": "lthough banks’ lending activities draw the attention of supervisors, lawmakers, researchers, and the press, a very substantial and growing portion of the industry’s total revenue is received in the form of fee income. The amount of fee, or noninterest, income earned by the banking sector suggests that the significance of payments services has been understated or overlooked. A lack of good information about the payments area may partly explain the failure to gauge the size of this business line correctly. In reports to supervisory agencies, banking organizations provide data relating primarily to their safety and soundness. By the design of the reports, banks transmit information on profitability, capital, and the size and condition of the loan portfolio. Limited information can be extracted from regulatory reports on individual business lines; in fact, these reports imply that banks receive just 7 percent of their net revenue from payments services. A narrow definition of payments, or transactions, services may also contribute to a poor appreciation of this banking function. While checking accounts are universally recognized as a payments service, credit cards, corporate trust accounts, and securities processing should also be treated as parts of a bank’s payments business. The common but limited definition of the payments area reflects the tight focus of banking research on lending and deposit taking. In theoretical studies, economists explain the prominence of commercial banks in the financial sector in terms of these two functions. First, by developing their skills in screening applicants, monitoring borrowers, and obtaining repayment, commercial banks became the dominant lender to relatively small-sized borrowers. Second, because investors demand protection against the risk that they may need liquidity earlier than anticipated, bank deposits are a special and highly useful financial instrument. While insightful, neither rationale explains why A",
"title": ""
},
{
"docid": "43a2d0cbfc79cc51d4c25fd17fe9bebb",
"text": "BACKGROUND\nProgress in the field of biology and biochemistry has led to the discovery of numerous bioactive peptides and proteins in the last few decades. Delivery of therapeutic proteins/peptides has received a considerable amount of attention in recent years.\n\n\nMETHODS\nIn this study, a two-step desolvation method was used to produce biodegradable hydrophilic gelatin nanoparticles (GNP) as a delivery system of protein model (BSA). The size and shape of the nanoparticles were examined by dynamic light scattering and scanning electron microscopy.\n\n\nRESULTS\nParticles with a mean diameter of 200-300 nm were produced and the percentage of entrapment efficiency was found to be 87.4. The optimum amount of theoretical BSA loading was obtained, the release of BSA was monitored in vitro, and the mechanism of release was studied. The BSA release profile showed a biphasic modulation characterized by an initial, relatively rapid release period, followed by a slower release phase.\n\n\nCONCLUSION\nResults show that the two-step desolvation is an appropriate method for preparing GNP as a delivery vehicle for BSA.",
"title": ""
},
{
"docid": "db81451679bf1fc215acce0ca05f7aee",
"text": "Decision analytics commonly focuses on the text mining of financial news sources in order to provide managerial decision support and to predict stock market movements. Existing predictive frameworks almost exclusively apply traditional machine learning methods, whereas recent research indicates that traditional machine learning methods are not sufficiently capable of extracting suitable features and capturing the non-linear nature of complex tasks. As a remedy, novel deep learning models aim to overcome this issue by extending traditional neural network models with additional hidden layers. Indeed, deep learning has been shown to outperform traditional methods in terms of predictive performance. In this paper, we adapt the novel deep learning technique to financial decision support. In this instance, we aim to predict the direction of stock movements following financial disclosures. As a result, we show how deep learning can outperform the accuracy of random forests as a benchmark for machine learning by 5.66 %.",
"title": ""
},
{
"docid": "96dec027591a118cbc6a94d7fc52ade8",
"text": "A new approach based on interval analysis is developed to find the global minimum-jerk (MJ) trajectory of a robot manipulator within a joint space scheme using cubic splines. MJ trajectories are desirable for their similarity to human joint movements and for their amenability to path tracking and to limit robot vibrations. This makes them attractive choices for robotic applications, in spite of the fact that the manipulator dynamics is not taken into account. Cubic splines are used in a framework that assures overall continuity of velocities and accelerations in the robot movement. The resulting MJ trajectory planning is shown to be a global constrained minimax optimization problem. This is solved by a newly devised algorithm based on interval analysis and proof of convergence with certainty to an arbitrarily good global solution is provided. The proposed planning method is applied to an example regarding a six-joint manipulator and comparisons with an alternative MJ planner are exposed.",
"title": ""
},
{
"docid": "a5052a27ebbfb07b02fa18b3d6bff6fc",
"text": "Popular techniques for domain adaptation such as the feature augmentation method of Daumé III (2009) have mostly been considered for sparse binary-valued features, but not for dense realvalued features such as those used in neural networks. In this paper, we describe simple neural extensions of these techniques. First, we propose a natural generalization of the feature augmentation method that uses K + 1 LSTMs where one model captures global patterns across all K domains and the remaining K models capture domain-specific information. Second, we propose a novel application of the framework for learning shared structures by Ando and Zhang (2005) to domain adaptation, and also provide a neural extension of their approach. In experiments on slot tagging over 17 domains, our methods give clear performance improvement over Daumé III (2009) applied on feature-rich CRFs.",
"title": ""
},
{
"docid": "80f88101ea4d095a0919e64b7db9cadb",
"text": "The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets.",
"title": ""
},
{
"docid": "486e3f5614f69f60d8703d8641c73416",
"text": "The Great East Japan Earthquake and Tsunami drastically changed Japanese society, and the requirements for ICT was completely redefined. After the disaster, it was impossible for disaster victims to utilize their communication devices, such as cellular phones, tablet computers, or laptop computers, to notify their families and friends of their safety and confirm the safety of their loved ones since the communication infrastructures were physically damaged or lacked the energy necessary to operate. Due to this drastic event, we have come to realize the importance of device-to-device communications. With the recent increase in popularity of D2D communications, many research works are focusing their attention on a centralized network operated by network operators and neglect the importance of decentralized infrastructureless multihop communication, which is essential for disaster relief applications. In this article, we propose the concept of multihop D2D communication network systems that are applicable to many different wireless technologies, and clarify requirements along with introducing open issues in such systems. The first generation prototype of relay by smartphone can deliver messages using only users' mobile devices, allowing us to send out emergency messages from disconnected areas as well as information sharing among people gathered in evacuation centers. The success of field experiments demonstrates steady advancement toward realizing user-driven networking powered by communication devices independent of operator networks.",
"title": ""
},
{
"docid": "1afd50a91b67bd1eab0db1c2a19a6c73",
"text": "In this paper we present syntactic characterization of temporal formulas that express various properties of interest in the verification of concurrent programs. Such a characterization helps us in choosing the right techniques for proving correctness with respect to these properties. The properties that we consider include safety properties, liveness properties and fairness properties. We also present algorithms for checking if a given temporal formula expresses any of these properties.",
"title": ""
},
{
"docid": "fe62e3a9acfe5009966434aa1f39099d",
"text": "Previous studies have found a subgroup of people with autism or Asperger Syndrome who pass second-order tests of theory of mind. However, such tests have a ceiling in developmental terms corresponding to a mental age of about 6 years. It is therefore impossible to say if such individuals are intact or impaired in their theory of mind skills. We report the performance of very high functioning adults with autism or Asperger Syndrome on an adult test of theory of mind ability. The task involved inferring the mental state of a person just from the information in photographs of a person's eyes. Relative to age-matched normal controls and a clinical control group (adults with Tourette Syndrome), the group with autism and Asperger Syndrome were significantly impaired on this task. The autism and Asperger Syndrome sample was also impaired on Happé's strange stories tasks. In contrast, they were unimpaired on two control tasks: recognising gender from the eye region of the face, and recognising basic emotions from the whole face. This provides evidence for subtle mindreading deficits in very high functioning individuals on the autistic continuum.",
"title": ""
}
] |
scidocsrr
|
27707f46e6686967d73291a0c16ba8de
|
Applying relay attacks to Google Wallet
|
[
{
"docid": "23cc8b190e9de5177cccf2f918c1ad45",
"text": "NFC is a standardised technology providing short-range RFID communication channels for mobile devices. Peer-to-peer applications for mobile devices are receiving increased interest and in some cases these services are relying on NFC communication. It has been suggested that NFC systems are particularly vulnerable to relay attacks, and that the attacker’s proxy devices could even be implemented using off-the-shelf NFC-enabled devices. This paper describes how a relay attack can be implemented against systems using legitimate peer-to-peer NFC communication by developing and installing suitable MIDlets on the attacker’s own NFC-enabled mobile phones. The attack does not need to access secure program memory nor use any code signing, and can use publicly available APIs. We go on to discuss how relay attack countermeasures using device location could be used in the mobile environment. These countermeasures could also be applied to prevent relay attacks on contactless applications using ‘passive’ NFC on mobile phones.",
"title": ""
}
] |
[
{
"docid": "45879e14f7fe6fe527739d74595b46dd",
"text": "Malware is one of the most damaging security threats facing the Internet today. Despite the burgeoning literature, accurate detection of malware remains an elusive and challenging endeavor due to the increasing usage of payload encryption and sophisticated obfuscation methods. Also, the large variety of malware classes coupled with their rapid proliferation and polymorphic capabilities and imperfections of real-world data (noise, missing values, etc) continue to hinder the use of more sophisticated detection algorithms. This paper presents a novel machine learning based framework to detect known and newly emerging malware at a high precision using layer 3 and layer 4 network traffic features. The framework leverages the accuracy of supervised classification in detecting known classes with the adaptability of unsupervised learning in detecting new classes. It also introduces a tree-based feature transformation to overcome issues due to imperfections of the data and to construct more informative features for the malware detection task. We demonstrate the effectiveness of the framework using real network data from a large Internet service provider.",
"title": ""
},
{
"docid": "4f80cfe0b34b8c8b18ea8108578e1607",
"text": "Hypnosis has been demonstrated to reduce analogue pain, and studies on the mechanisms of laboratory pain reduction have provided useful applications to clinical populations. Studies showing central nervous system activity during hypnotic procedures offer preliminary information concerning possible physiological mechanisms of hypnotic analgesia. Randomized controlled studies with clinical populations indicate that hypnosis has a reliable and significant impact on acute procedural pain and chronic pain conditions. Methodological issues of this body of research are discussed, as are methods to better integrate hypnosis into comprehensive pain treatment.",
"title": ""
},
{
"docid": "da36a172f042ff9ef1a4fdf9ccc0f0a8",
"text": "The Human Brain Project (HBP) is a candidate project in the European Union’s FET Flagship Program, funded by the ICT Program in the Seventh Framework Program. The project will develop a new integrated strategy for understanding the human brain and a novel research platform that will integrate all the data and knowledge we can acquire about the structure and function of the brain and use it to build unifying models that can be validated by simulations running on supercomputers. The project will drive the development of supercomputing for the life sciences, generate new neuroscientific data as a benchmark for modeling, develop radically new tools for informatics, modeling and simulation, and build virtual laboratories for collaborative basic and clinical studies, drug simulation and virtual prototyping of neuroprosthetic, neuromorphic, and robotic devices. © 2011 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of [name organizer]",
"title": ""
},
{
"docid": "dc98ddb6033ca1066f9b0ba5347a3d0c",
"text": "Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. This work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet’s ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.",
"title": ""
},
{
"docid": "9f4db80a3474bf5651ff47057a4b2ae5",
"text": "With the emergence of free and open source software (F/OSS) projects (e.g. Linux) as serious contenders to well-established proprietary software, advocates of F/OSS are quick to generalize the superiority of this approach to software development. On the other hand, some wellestablished software development firms view F/OSS as a threat and vociferously refute the claims of F/OSS advocates. This article represents a tutorial on F/OSS that tries objectively to identify and present open source software’s concepts, benefits, and challenges. From our point of view, F/OSS is more than just software. We conceptualize it as an IPO system that consists of the license as the boundary of the system, the community that provides the input, the development process, and the software as the output. After describing the evolution and definition of F/OSS, we identify three approaches to benefiting from F/OSS that center on (1) the software, (2) the community, and (3) the license respectively. Each approach is fit for a specific situation and provides a unique set of benefits and challenges. We further illustrate our points by refuting common misconceptions associated with F/OSS based upon our conceptual framework.",
"title": ""
},
{
"docid": "ceb9e37cee390fac163154b70808f89d",
"text": "This study extends the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model to investigate factors affecting the acceptance and use of a social networking service (SNS) called Instagram. The UTAUT2 model is modified to better suit the context of SNSs by replacing the price value construct with self-congruence. Furthermore, we explore the effects of behavioral intention and use behavior on \"user indegree\" defined as the number of people who follow an SNS user. The results of the survey study largely support the hypothesized model in the context of Instagram. The findings contribute to previous knowledge by demonstrating the important roles of hedonic motivation and habit in consumer acceptance and use of SNSs, and by providing novel insights into how users can attract followers within the social networks.",
"title": ""
},
{
"docid": "691f07bc4f1339d0915e98b76a6b6da1",
"text": "Malicious Web pages that launch drive-by-download attacks on Web browsers have increasingly become a problem in recent years. High-interaction client honeypots are security devices that can detect these malicious Web pages on a network. However, high-interaction client honeypots are both resource-intensive and unable to handle the increasing array of vulnerable clients. This paper presents a novel classification method for detecting malicious Web pages that involves inspecting the underlying server relationships. Because of the unique structure of malicious front-end Web pages and centralized exploit servers, merely counting the number of domain name extensions and Domain Name System (DNS) servers used to resolve the host names of all Web servers involved in rendering a page is sufficient to determine whether a Web page is malicious or benign, independent of the vulnerable Web browser targeted by these pages. Combining high-interaction client honeypots and this new classification method into a hybrid system leads to performance improvements.",
"title": ""
},
{
"docid": "46b13741add1385269e18de2f8faf1f8",
"text": "It has been suggested that there are two forms of narcissism: a grandiose subtype and a vulnerable subtype. Although these forms of narcissism share certain similarities, it is believed that these subtypes may differ in the domains upon which their self-esteem is based. To explore this possibility, the present study examined the associations between these narcissistic subtypes and domain-specific contingencies of self-worth. The results show that vulnerable narcissism was positively associated with contingencies of self-worth across a variety of domains. In contrast, the associations between grandiose narcissism and domain-specific contingencies of self-worth were more complex and included both positive and negative relationships. These results provide additional support for the distinction between grandiose and vulnerable narcissism by showing that the domains of contingent self-esteem associated with grandiose narcissism may be more limited in scope than those associated with vulnerable narcissism.",
"title": ""
},
{
"docid": "a32d80b0b446f91832f92ca68597821d",
"text": "PURPOSE\nWe define the cause of the occurrence of Peyronie's disease.\n\n\nMATERIALS AND METHODS\nClinical evaluation of a large number of patients with Peyronie's disease, while taking into account the pathological and biochemical findings of the penis in patients who have been treated by surgery, has led to an understanding of the relationship of the anatomical structure of the penis to its rigidity during erection, and how the effect of the stress imposed upon those structures during intercourse is modified by the loss of compliance resulting from aging of the collagen composing those structures. Peyronie's disease occurs most frequently in middle-aged men, less frequently in older men and infrequently in younger men who have more elastic tissues. During erection, when full tumescence has occurred and the elastic tissues of the penis have reached the limit of their compliance, the strands of the septum give vertical rigidity to the penis. Bending the erect penis out of column stresses the attachment of the septal strands to the tunica albuginea.\n\n\nRESULTS\nPlaques of Peyronie's disease are found where the strands of the septum are attached in the dorsal or ventral aspect of the penis. The pathological scar in the tunica albuginea of the corpora cavernosa in Peyronie's disease is characterized by excessive collagen accumulation, fibrin deposition and disordered elastic fibers in the plaque.\n\n\nCONCLUSIONS\nWe suggest that Peyronie's disease results from repetitive microvascular injury, with fibrin deposition and trapping in the tissue space that is not adequately cleared during the normal remodeling and repair of the tear in the tunica. Fibroblast activation and proliferation, enhanced vessel permeability and generation of chemotactic factors for leukocytes are stimulated by fibrin deposited in the normal process of wound healing. However, in Peyronie's disease the lesion fails to resolve either due to an inability to clear the original stimulus or due to further deposition of fibrin subsequent to repeated trauma. Collagen is also trapped and pathological fibrosis ensues.",
"title": ""
},
{
"docid": "41a4e84cf6dfc073c962dd9c6c13d6fe",
"text": "Pteridinone-based Toll-like receptor 7 (TLR7) agonists were identified as potent and selective alternatives to the previously reported adenine-based agonists, leading to the discovery of GS-9620. Analogues were optimized for the immunomodulatory activity and selectivity versus other TLRs, based on differential induction of key cytokines including interferon α (IFN-α) and tumor necrosis factor α (TNF-α). In addition, physicochemical properties were adjusted to achieve desirable in vivo pharmacokinetic and pharmacodynamic properties. GS-9620 is currently in clinical evaluation for the treatment of chronic hepatitis B (HBV) infection.",
"title": ""
},
{
"docid": "9e90e23aee87a181ca32a494e5d620e0",
"text": "BACKGROUND\nThe rapid growth in the use of mobile phone applications (apps) provides the opportunity to increase access to evidence-based mental health care.\n\n\nOBJECTIVE\nOur goal was to systematically review the research evidence supporting the efficacy of mental health apps for mobile devices (such as smartphones and tablets) for all ages.\n\n\nMETHODS\nA comprehensive literature search (2008-2013) in MEDLINE, Embase, the Cochrane Central Register of Controlled Trials, PsycINFO, PsycTESTS, Compendex, and Inspec was conducted. We included trials that examined the effects of mental health apps (for depression, anxiety, substance use, sleep disturbances, suicidal behavior, self-harm, psychotic disorders, eating disorders, stress, and gambling) delivered on mobile devices with a pre- to posttest design or compared with a control group. The control group could consist of wait list, treatment-as-usual, or another recognized treatment.\n\n\nRESULTS\nIn total, 5464 abstracts were identified. Of those, 8 papers describing 5 apps targeting depression, anxiety, and substance abuse met the inclusion criteria. Four apps provided support from a mental health professional. Results showed significant reductions in depression, stress, and substance use. Within-group and between-group intention-to-treat effect sizes ranged from 0.29-2.28 and 0.01-0.48 at posttest and follow-up, respectively.\n\n\nCONCLUSIONS\nMental health apps have the potential to be effective and may significantly improve treatment accessibility. However, the majority of apps that are currently available lack scientific evidence about their efficacy. The public needs to be educated on how to identify the few evidence-based mental health apps available in the public domain to date. Further rigorous research is required to develop and test evidence-based programs. Given the small number of studies and participants included in this review, the high risk of bias, and unknown efficacy of long-term follow-up, current findings should be interpreted with caution, pending replication. Two of the 5 evidence-based mental health apps are currently commercially available in app stores.",
"title": ""
},
{
"docid": "6ceab65cc9505cf21824e9409cf67944",
"text": "Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for prediction task and utilize Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on largescale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level",
"title": ""
},
{
"docid": "8e02a76799f72d86e7240384bea563fd",
"text": "We have developed the suspended-load backpack, which converts mechanical energy from the vertical movement of carried loads (weighing 20 to 38 kilograms) to electricity during normal walking [generating up to 7.4 watts, or a 300-fold increase over previous shoe devices (20 milliwatts)]. Unexpectedly, little extra metabolic energy (as compared to that expended carrying a rigid backpack) is required during electricity generation. This is probably due to a compensatory change in gait or loading regime, which reduces the metabolic power required for walking. This electricity generation can help give field scientists, explorers, and disaster-relief workers freedom from the heavy weight of replacement batteries and thereby extend their ability to operate in remote areas.",
"title": ""
},
{
"docid": "9a4bdfe80a949ec1371a917585518ae4",
"text": "This article presents the event calculus, a logic-based formalism for representing actions and their effects. A circumscriptive solution to the frame problem is deployed which reduces to monotonic predicate completion. Using a number of benchmark examples from the literature, the formalism is shown to apply to a variety of domains, including those featuring actions with indirect effects, actions with non-deterministic effects, concurrent actions, and continuous change.",
"title": ""
},
{
"docid": "62e4e376170a649efd578d968392a12b",
"text": "This paper presents a new algorithm to identify Bengali Sign Language (BdSL) for recognizing 46 hand gestures, including 9 gestures for 11 vowels, 28 gestures for 39 consonants and 9 gestures for 9 numerals according to the similarity of pronunciation. The image was first re-sized and then converted to binary format to crop the region of interest by using only top-most, left-most and right-most white pixels. The positions of the finger-tips were found by applying a fingertip finder algorithm. Eleven features were extracted from each image to train a multilayered feedforward neural network with a back-propagation training algorithm. Distance between the centroid of the hand region and each finger tip was calculated along with the angles between each fingertip and horizontal x axis that crossed the centroid. A database of 2300 images of Bengali signs was constructed to evaluate the effectiveness of the proposed system, where 70%, 15% and 15% images were used for training, testing, and validating, respectively. Experimental result showed an average of 88.69% accuracy in recognizing BdSL which is very much promising compare to other existing methods.",
"title": ""
},
{
"docid": "5c111a5a30f011e4f47fb9e2041644f9",
"text": "Since the audio recapture can be used to assist audio splicing, it is important to identify whether a suspected audio recording is recaptured or not. However, few works on such detection have been reported. In this paper, we propose an method to detect the recaptured audio based on deep learning and we investigate two deep learning techniques, i.e., neural network with dropout method and stack auto-encoders (SAE). The waveform samples of audio frame is directly used as the input for the deep neural network. The experimental results show that error rate around 7.5% can be achieved, which indicates that our proposed method can successfully discriminate recaptured audio and original audio.",
"title": ""
},
{
"docid": "8b6f1d896068e1c9849fc854a3451b97",
"text": "The Integrated Modular Motor Drive (IMMD) concept provides a promising approach to integrating motor drive electronics into the machine housing by modularizing both the machine stator and the power converter. The basic module of the IMMD consists of a stator pole-piece wound with a concentrated coil and fitted with a dedicated power converter unit. This paper addresses several of the challenges associated with the design of an IMMD power converter module. In particular, the issues associated with configuring the dc bus capacitance to meet the demanding size requirements of the power converter are addressed, including the effect of dc bus connections. Experimental results for converter operation are presented, and opportunities to further reduce the capacitor size using active control strategies are discussed.",
"title": ""
},
{
"docid": "2e27078279131bf08b3f1cb060586599",
"text": "The QTW VTOL UAV, which features tandem tilt wings with propellers mounted at the mid-span of each wing, is one of the most promising UAV configurations, having both VTOL capability and high cruise performance. A six-degree-of-freedom dynamic simulation model covering the full range of the QTW flight envelope was developed and a flight control system including a transition schedule and a stability and control augmentation system (SCAS) was designed. The flight control system was installed in a small prototype QTW and a full transition flight test including vertical takeoff, accelerating transition, cruise, decelerating transition and hover landing was successfully accomplished.",
"title": ""
},
{
"docid": "7677f90e0d949488958b27422bdffeb5",
"text": "This vignette is a slightly modified version of Koenker (2008a). It was written in plain latex not Sweave, but all data and code for the examples described in the text are available from either the JSS website or from my webpages. Quantile regression for censored survival (duration) data offers a more flexible alternative to the Cox proportional hazard model for some applications. We describe three estimation methods for such applications that have been recently incorporated into the R package quantreg: the Powell (1986) estimator for fixed censoring, and two methods for random censoring, one introduced by Portnoy (2003), and the other by Peng and Huang (2008). The Portnoy and Peng-Huang estimators can be viewed, respectively, as generalizations to regression of the Kaplan-Meier and NelsonAalen estimators of univariate quantiles for censored observations. Some asymptotic and simulation comparisons are made to highlight advantages and disadvantages of the three methods.",
"title": ""
}
] |
scidocsrr
|
5b0a99d27af3f7ca80a280ff3443a0f0
|
Data fusion in intelligent transportation systems: Progress and challenges - A survey
|
[
{
"docid": "2b1048b3bdb52c006437b18d7b458871",
"text": "A road interpretation module is presented! which is part of a real-time vehicle guidance system for autonomous driving. Based on bifocal computer vision, the complete system is able to drive a vehicle on marked or unmarked roads, to detect obstacles, and to react appropriately. The hardware is a network of 23 transputers, organized in modular clusters. Parallel modules performing image analysis, feature extraction, object modelling, sensor data integration and vehicle control, are organized in hierarchical levels. The road interpretation module is based on the principle of recursive state estimation by Kalman filter techniques. Internal 4-D models of the road, vehicle position, and orientation are updated using data produced by the image-processing module. The system has been implemented on two vehicles (VITA and VaMoRs) and demonstrated in the framework of PROMETHEUS, where the ability of autonomous driving through narrow curves and of lane changing were demonstrated. Meanwhile, the system has been tested on public roads in real traffic situations, including travel on a German Autobahn autonomously at speeds up to 85 km/h. Belcastro, C.M., Fischl, R., and M. Kam. “Fusion Techniques Using Distributed Kalman Filtering for Detecting Changes in Systems.” Proceedings of the 1991 American Control Conference. 26-28 June 1991: Boston, MA. American Autom. Control Council, 1991. Vol. 3: (2296-2298).",
"title": ""
}
] |
[
{
"docid": "216e38bb5e6585099e949572f7645ebf",
"text": "The graviperception of the hypotrichous ciliate Stylonychia mytilus was investigated using electrophysiological methods and behavioural analysis. It is shown that Stylonychia can sense gravity and thereby compensates sedimentation rate by a negative gravikinesis. The graviresponse consists of a velocity-regulating physiological component (negative gravikinesis) and an additional orientational component. The latter is largely based on a physical mechanism but might, in addition, be affected by the frequency of ciliary reversals, which is under physiological control. We show that the external stimulus of gravity is transformed to a physiological signal, activating mechanosensitive calcium and potassium channels. Earlier electrophysiological experiments revealed that these ion channels are distributed in the manner of two opposing gradients over the surface membrane. Here, we show, for the first time, records of gravireceptor potentials in Stylonychia that are presumably based on this two-gradient system of ion channels. The gravireceptor potentials had maximum amplitudes of approximately 4 mV and slow activation characteristics (0.03 mV s(-1)). The presumptive number of involved graviperceptive ion channels was calculated and correlates with the analysis of the locomotive behaviour.",
"title": ""
},
{
"docid": "734825ba0795a214c0cdf4c668ac7967",
"text": "Advances in microbial methods have demonstrated that microorganisms globally are the dominating organisms both concerning biomass and diversity. Their functional and genetic potential may exceed that of higher organisms. Studies of bacterial diversity have been hampered by their dependence on phenotypic characterization of bacterial isolates. Molecular techniques have provided the tools for analyzing the entire bacterial community including those which we are not able to grow in the laboratory. Reassociation analysis of DNA isolated directly from the bacteria in pristine soil and marine sediment samples revealed that such environments contained in the order of 10 000 bacterial types. The diversity of the total bacterial community was approximately 170 times higher than the diversity of the collection of bacterial isolates from the same soil. The culturing conditions therefore select for a small and probably skewed fraction of the organisms present in the environment. Environmental stress and agricultural management reduce the bacterial diversity. With the reassociation technique it was demonstrated that in heavily polluted fish farm sediments the diversity was reduced by a factor of 200 as compared to pristine sediments. Here we discuss some molecular mechanisms and environmental factors controlling the bacterial diversity in soil and sediments.",
"title": ""
},
{
"docid": "3eef0b6dee8d62e58a9369ed1e03d8ba",
"text": "Semi-supervised learning methods based on generative adversarial networks (GANs) obtained strong empirical results, but it is not clear 1) how the discriminator benefits from joint training with a generator, and 2) why good semi-supervised classification performance and a good generator cannot be obtained at the same time. Theoretically we show that given the discriminator objective, good semisupervised learning indeed requires a bad generator, and propose the definition of a preferred generator. Empirically, we derive a novel formulation based on our analysis that substantially improves over feature matching GANs, obtaining state-of-the-art results on multiple benchmark datasets2.",
"title": ""
},
{
"docid": "0cb3cdb1e44fd9171156ad46fdf2d2ed",
"text": "In this paper, from the viewpoint of scene under standing, a three-layer Bayesian hierarchical framework (BHF) is proposed for robust vacant parking space detection. In practice, the challenges of vacant parking space inference come from dramatic luminance variations, shadow effect, perspective distortion, and the inter-occlusion among vehicles. By using a hidden labeling layer between an observation layer and a scene layer, the BHF provides a systematic generative structure to model these variations. In the proposed BHF, the problem of luminance variations is treated as a color classification problem and is tack led via a classification process from the observation layer to the labeling layer, while the occlusion pattern, perspective distortion, and shadow effect are well modeled by the relationships between the scene layer and the labeling layer. With the BHF scheme, the detection of vacant parking spaces and the labeling of scene status are regarded as a unified Bayesian optimization problem subject to a shadow generation model, an occlusion generation model, and an object classification model. The system accuracy was evaluated by using outdoor parking lot videos captured from morning to evening. Experimental results showed that the proposed framework can systematically determine the vacant space number, efficiently label ground and car regions, precisely locate the shadowed regions, and effectively tackle the problem of luminance variations.",
"title": ""
},
{
"docid": "976dc6591e21e96ddb9ac6133a47e2ec",
"text": "Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework [15] and its fast versions [14, 27]. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Despite that we are handling with two relatively easier tasks, they are not solved perfectly and there's still room for improvement. In this paper, we push the \"divide and conquer\" solution even further by dividing each task into two sub-tasks. We call the proposed method \"CRAFT\" (Cascade Regionproposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals, in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter-and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of the-art on object detection benchmarks like PASCAL VOC 07/12 and ILSVRC.",
"title": ""
},
{
"docid": "c7b92058dd9aee5217725a55ca1b56ff",
"text": "For the autonomous navigation of mobile robots, robust and fast visual localization is a challenging task. Although some end-to-end deep neural networks for 6-DoF Visual Odometry (VO) have been reported with promising results, they are still unable to solve the drift problem in long-range navigation. In this paper, we propose the deep global-relative networks (DGRNets), which is a novel global and relative fusion framework based on Recurrent Convolutional Neural Networks (RCNNs). It is designed to jointly estimate global pose and relative localization from consecutive monocular images. DGRNets include feature extraction sub-networks for discriminative feature selection, RCNNs-type relative pose estimation subnetworks for smoothing the VO trajectory and RCNNs-type global pose regression sub-networks for avoiding the accumulation of pose errors. We also propose two loss functions: the first one consists of Cross Transformation Constraints (CTC) that utilize geometric consistency of the adjacent frames to train a more accurate relative sub-networks, and the second one is composed of CTC and Mean Square Error (MSE) between the predicted pose and ground truth used to train the end-to-end DGRNets. The competitive experiments on indoor Microsoft 7-Scenes and outdoor KITTI dataset show that our DGRNets outperform other learning-based monocular VO methods in terms of pose accuracy.",
"title": ""
},
{
"docid": "1c5f53fe8d663047a3a8240742ba47e4",
"text": "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLAHANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLAHANS. Our analysis and experiments show that with the assistance of CLAHANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLAHANS with that of existing clustering methods show that CLAHANS is the most efficient.",
"title": ""
},
{
"docid": "45475cd9bd2e71699590bbdbebd83829",
"text": "Very little is known about computer gamers' playing experience. Most social scientific research has treated gaming as an undifferentiated activity associated with various factors outside the gaming context. This article considers computer games as behavior settings worthy of social scientific investigation in their own right and contributes to a better understanding of computer gaming as a complex, context-dependent, goal-directed activity. The results of an exploratory interview-based study of computer gaming within the \"first-person shooter\" (FPS) game genre are reported. FPS gaming is a fast-paced form of goal-directed activity that takes place in complex, dynamic behavioral environments where players must quickly make sense of changes in their immediate situation and respond with appropriate actions. Gamers' perceptions and evaluations of various aspects of the FPS gaming situation are documented, including positive and negative aspects of game interfaces, map environments, weapons, computer-generated game characters (bots), multiplayer gaming on local area networks (LANs) or the internet, and single player gaming. The results provide insights into the structure of gamers' mental models of the FPS genre by identifying salient categories of their FPS gaming experience. It is proposed that aspects of FPS games most salient to gamers were those perceived to be most behaviorally relevant to goal attainment, and that the evaluation of various situational stimuli depended on the extent to which they were perceived either to support or to hinder goal attainment. Implications for the design of FPS games that players experience as challenging, interesting, and fun are discussed.",
"title": ""
},
{
"docid": "ce167e13e5f129059f59c8e54b994fd4",
"text": "Critical research has emerged as a potentially important stream in information systems research, yet the nature and methods of critical research are still in need of clarification. While criteria or principles for evaluating positivist and interpretive research have been widely discussed, criteria or principles for evaluating critical social research are lacking. Therefore, the purpose of this paper is to propose a set of principles for the conduct of critical research. This paper has been accepted for publication in MIS Quarterly and follows on from an earlier piece that suggested a set of principles for interpretive research (Klein and Myers, 1999). The co-author of this paper is Heinz Klein.",
"title": ""
},
{
"docid": "7cfd90a3c9091c296e621ff34fc471e6",
"text": "The study aimed to develop machine learning models that have strong prediction power and interpretability for diagnosis of glaucoma based on retinal nerve fiber layer (RNFL) thickness and visual field (VF). We collected various candidate features from the examination of retinal nerve fiber layer (RNFL) thickness and visual field (VF). We also developed synthesized features from original features. We then selected the best features proper for classification (diagnosis) through feature evaluation. We used 100 cases of data as a test dataset and 399 cases of data as a training and validation dataset. To develop the glaucoma prediction model, we considered four machine learning algorithms: C5.0, random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN). We repeatedly composed a learning model using the training dataset and evaluated it by using the validation dataset. Finally, we got the best learning model that produces the highest validation accuracy. We analyzed quality of the models using several measures. The random forest model shows best performance and C5.0, SVM, and KNN models show similar accuracy. In the random forest model, the classification accuracy is 0.98, sensitivity is 0.983, specificity is 0.975, and AUC is 0.979. The developed prediction models show high accuracy, sensitivity, specificity, and AUC in classifying among glaucoma and healthy eyes. It will be used for predicting glaucoma against unknown examination records. Clinicians may reference the prediction results and be able to make better decisions. We may combine multiple learning models to increase prediction accuracy. The C5.0 model includes decision rules for prediction. It can be used to explain the reasons for specific predictions.",
"title": ""
},
{
"docid": "4ab3db4b0c338dbe8d5bb9e1f49f2a5c",
"text": "BACKGROUND\nSub-Saharan African (SSA) countries are currently experiencing one of the most rapid epidemiological transitions characterized by increasing urbanization and changing lifestyle factors. This has resulted in an increase in the incidence of non-communicable diseases, especially cardiovascular disease (CVD). This double burden of communicable and chronic non-communicable diseases has long-term public health impact as it undermines healthcare systems.\n\n\nPURPOSE\nThe purpose of this paper is to explore the socio-cultural context of CVD risk prevention and treatment in sub-Saharan Africa. We discuss risk factors specific to the SSA context, including poverty, urbanization, developing healthcare systems, traditional healing, lifestyle and socio-cultural factors.\n\n\nMETHODOLOGY\nWe conducted a search on African Journals On-Line, Medline, PubMed, and PsycINFO databases using combinations of the key country/geographic terms, disease and risk factor specific terms such as \"diabetes and Congo\" and \"hypertension and Nigeria\". Research articles on clinical trials were excluded from this overview. Contrarily, articles that reported prevalence and incidence data on CVD risk and/or articles that report on CVD risk-related beliefs and behaviors were included. Both qualitative and quantitative articles were included.\n\n\nRESULTS\nThe epidemic of CVD in SSA is driven by multiple factors working collectively. Lifestyle factors such as diet, exercise and smoking contribute to the increasing rates of CVD in SSA. Some lifestyle factors are considered gendered in that some are salient for women and others for men. For instance, obesity is a predominant risk factor for women compared to men, but smoking still remains mostly a risk factor for men. Additionally, structural and system level issues such as lack of infrastructure for healthcare, urbanization, poverty and lack of government programs also drive this epidemic and hampers proper prevention, surveillance and treatment efforts.\n\n\nCONCLUSION\nUsing an African-centered cultural framework, the PEN3 model, we explore future directions and efforts to address the epidemic of CVD risk in SSA.",
"title": ""
},
{
"docid": "267ee2186781941c1f9964afd07a956c",
"text": "Considerations in applying circuit breaker protection to DC systems are capacitive discharge, circuit breaker coordination and impacts of double ground faults. Test and analysis results show the potential for equipment damage. Solutions are proposed at the cost of increased integration between power conversion and protection systems.",
"title": ""
},
{
"docid": "314e10ba42a13a84b40a1b0367bd556e",
"text": "How do users behave in online chatrooms, where they instantaneously read and write posts? We analyzed about 2.5 million posts covering various topics in Internet relay channels, and found that user activity patterns follow known power-law and stretched exponential distributions, indicating that online chat activity is not different from other forms of communication. Analysing the emotional expressions (positive, negative, neutral) of users, we revealed a remarkable persistence both for individual users and channels. I.e. despite their anonymity, users tend to follow social norms in repeated interactions in online chats, which results in a specific emotional \"tone\" of the channels. We provide an agent-based model of emotional interaction, which recovers qualitatively both the activity patterns in chatrooms and the emotional persistence of users and channels. While our assumptions about agent's emotional expressions are rooted in psychology, the model allows to test different hypothesis regarding their emotional impact in online communication.",
"title": ""
},
{
"docid": "c74290691708a5ef66209369c8a377af",
"text": "Network traffic has traditionally exhibited temporal locality in the header field of packets. Such locality is intuitive and is a consequence of the semantics of network protocols. However, in contrast, the locality in the packet payload has not been studied in significant detail. In this work we study temporal locality in the packet payload. Temporal locality can also be viewed as redundancy, and we observe significant redundancy in the packet payload. We investigate mechanisms to exploit it in a networking application. We choose Intrusion Detection Systems (IDS) as a case study. An IDS like the popular Snort operates by scanning packet payload for known attack strings. It first builds a Finite State Machine (FSM) from a database of attack strings, and traverses this FSM using bytes from the packet payload. So temporal locality in network traffic provides us an opportunity to accelerate this FSM traversal. Our mechanism dynamically identifies redundant bytes in the packet and skips their redundant FSM traversal. We further parallelize our mechanism by performing the redundancy identification concurrently with stages of Snort packet processing. IDS are commonly deployed in commodity processors, and we evaluate our mechanism on an Intel Core i3. Our performance study indicates that the length of the redundant chunk is a key factor in performance. We also observe important performance benefits in deploying our redundancy-aware mechanism in the Snort IDS[32].",
"title": ""
},
{
"docid": "b3f9c598719f71d87be372604c0d42d4",
"text": "Vulnerability assessment is the essential and well-established process of probing security flaws, weaknesses and inadequacies in a computing infrastructure. The process helps organisations to eliminate security issues before attackers can exploit them formonetary gains or othermalicious purposes. The significant advancements in desktop, Web and mobile computing technologies have widened the range of security-related complications. It has become an increasingly crucial challenge for security analysts to devise comprehensive security evaluation and mitigation tools that can protect the business-critical operations. Researchers have proposed a variety of methods for vulnerability assessment, which can be broadly categorised into manual, assistive and fully automated.Manual vulnerability assessment is performed by a human expert, based on a specific set of instructions that are aimed at finding the security vulnerability. This method requires a large amount of time, effort and resources, and it is heavily reliant on expert knowledge, something that is widely attributed to being in short supply. The assistive vulnerability assessment is conducted with the help of scanning tools or frameworks that are usually up-to-date and look for the most relevant security weakness. However, the lack of flexibility, compatibility and regular maintenance of tools, as they contain static knowledge, renders them outdated and does not provide the beneficial information (in terms of depth and scope of tests) about the state of security. Fully automated vulnerability assessment leverages artificial intelligence techniques to produce expert-like decisionswithout human assistance and is by far considered as the most desirable (due to time and financial reduction for the end-user) method of evaluating a systems’ security. Although being highly desirable, such techniques require additional research in improving automated knowledge acquisition, representation and learning mechanisms. Further research is also needed to develop automated vulnerability mitigation techniques that are capable of actually securing the computing platform. The volume of research being S. Khan (B) · S. Parkinson Department of Computer Science, University of Huddersfield, Huddersfield, UK e-mail: [email protected] S. Parkinson e-mail: [email protected] © Springer International Publishing AG, part of Springer Nature 2018 S. Parkinson et al. (eds.), Guide to Vulnerability Analysis for Computer Networks and Systems, Computer Communications and Networks, https://doi.org/10.1007/978-3-319-92624-7_1 3 4 S. Khan and S. Parkinson performed into the use of artificial intelligence techniques in vulnerability assessment is increasing, and there is a need to provide a survey into the state of the art.",
"title": ""
},
{
"docid": "7dec4f1b872b6092bd1c050ec5aa07a9",
"text": "Predictive models based on machine learning can be highly sensitive to data error. Training data are often combined from a variety of different sources, each susceptible to different types of inconsistencies, and as new data stream in during prediction time, the model may encounter previously unseen inconsistencies. An important class of such inconsistencies are domain value violations that occur when an attribute value is outside of an allowed domain. We explore automatically detecting and repairing such violations by leveraging the often available clean test labels to determine whether a given detection and repair combination will improve model accuracy. We present BoostClean which automatically selects an ensemble of error detection and repair combinations using statistical boosting. BoostClean selects this ensemble from an extensible library that is pre-populated general detection functions, including a novel detector based on the Word2Vec deep learning model, which detects errors across a diverse set of domains. Our evaluation on a collection of 12 datasets from Kaggle, the UCI repository, realworld data analyses, and production datasets that show that BoostClean can increase absolute prediction accuracy by up to 9% over the best non-ensembled alternatives. Our optimizations including parallelism, materialization, and indexing techniques show a 22.2× end-to-end speedup on a 16-core machine.",
"title": ""
},
{
"docid": "b14a77c6e663af1445e466a3e90d4e5f",
"text": "This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences ( i.e., the golden target sentences); And the discriminator makes efforts to discriminate the machine-generated sentences from humantranslated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-ofthe-art Transformer on English-German and Chinese-English translation tasks.",
"title": ""
},
{
"docid": "ea453548c28d293ba45fca16e5fe0667",
"text": "In this paper, novel mechanical mechanism and control method using SEA are suggested for natural gait pattern and effective gait rehabilitation. Conventional body weight support system (BWS) focuses on static position lifting and has limitation for a subject to constrain natural gait pattern especially in the vertical and medial-lateral movement. To overcome such abnormal gait pattern during BWS rehabilitation, modal force and torque control using wire is applied in this paper. The control makes it possible for a subject to generate natural gait motion which is natural in the vertical and medial-lateral movement during gait rehabilitation. To implement modal force and torque control, reference transformation for control of the tension of two wire is suggested. To control the tension of each wire, SEA which is controlled by the model base is utilized. And experiments are performed to assess the control performance.",
"title": ""
},
{
"docid": "c06e1491b0aabbbd73628c2f9f45d65d",
"text": "With the integration of deep learning into the traditional field of reinforcement learning in the recent decades, the spectrum of applications that artificial intelligence caters is currently very broad. As using AI to play games is a traditional application of reinforcement learning, the project’s objective is to implement a deep reinforcement learning agent that can defeat a video game. Since it is often difficult to determine which algorithms are appropriate given the wide selection of state-of-the-art techniques in the discipline, proper comparisons and investigations of the algorithms are a prerequisite to implementing such an agent. As a result, this paper serves as a platform for exploring the possibility and effectiveness of using conventional state-of-the-art reinforcement learning methods for playing Pacman maps. In particular, this paper demonstrates that Combined DQN, a variation of Rainbow DQN, is able to attain high performance in small maps such as 506Pacman, smallGrid and mediumGrid. It was also demonstrated that the trained agents could also play Pacman maps similar to training with limited performance. Nevertheless, the algorithm suffers due to its data inefficiency and lack of human-like features, which may be remedied in the future by introducing more human-like features into the algortihm, such as intrinsic motivation and imagination.",
"title": ""
},
{
"docid": "b410ff81fdef122597de7b4cdf5d7d4d",
"text": "Since its introduction, frequent-pattern mining has been the subject of numerous studies, including incremental updating. Many existing incremental mining algorithms are Apriori-based, which are not easily adoptable to FP-tree-based frequent-pattern mining. In this paper, we propose a novel tree structure, called CanTree (canonical-order tree), that captures the content of the transaction database and orders tree nodes according to some canonical order. By exploiting its nice properties, the CanTree can be easily maintained when database transactions are inserted, deleted, and/or modified. For example, the CanTree does not require adjustment, merging, and/or splitting of tree nodes during maintenance. No rescan of the entire updated database or reconstruction of a new tree is needed for incremental updating. Experimental results show the effectiveness of our CanTree in the incremental mining of frequent patterns. Moreover, the applicability of CanTrees is not confined to incremental mining; CanTrees can also be applicable to other frequent-pattern mining tasks including constrained mining and interactive mining.",
"title": ""
}
] |
scidocsrr
|
a90305fe2de2b724db6df7f9b18d9fc2
|
Detect-to-Retrieve: Efficient Regional Aggregation for Image Search
|
[
{
"docid": "fb7c268419d798587e1675a5a1a37232",
"text": "Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image reranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets.",
"title": ""
}
] |
[
{
"docid": "624ddac45b110bc809db198d60f3cf97",
"text": "Poisson regression models provide a standard framework for the analysis of count data. In practice, however, count data are often overdispersed relative to the Poisson distribution. One frequent manifestation of overdispersion is that the incidence of zero counts is greater than expected for the Poisson distribution and this is of interest because zero counts frequently have special status. For example, in counting disease lesions on plants, a plant may have no lesions either because it is resistant to the disease, or simply because no disease spores have landed on it. This is the distinction between structural zeros, which are inevitable, and sampling zeros, which occur by chance. In recent years there has been considerable interest in models for count data that allow for excess zeros, particularly in the econometric literature. These models complement more conventional models for overdispersion that concentrate on modelling the variance-mean relationship correctly. Application areas are diverse and have included manufacturing defects (Lambert, 1992), patent applications (Crepon & Duguet, 1997), road safety (Miaou, 1994), species abundance (Welsh et al., 1996; Faddy, 1998), medical consultations",
"title": ""
},
{
"docid": "351bacafe348cf235dc24e2925e71992",
"text": "Dengue, chikungunya, and Zika virus epidemics transmitted by Aedes aegypti mosquitoes have recently (re)emerged and spread throughout the Americas, Southeast Asia, the Pacific Islands, and elsewhere. Understanding how environmental conditions affect epidemic dynamics is critical for predicting and responding to the geographic and seasonal spread of disease. Specifically, we lack a mechanistic understanding of how seasonal variation in temperature affects epidemic magnitude and duration. Here, we develop a dynamic disease transmission model for dengue virus and Aedes aegypti mosquitoes that integrates mechanistic, empirically parameterized, and independently validated mosquito and virus trait thermal responses under seasonally varying temperatures. We examine the influence of seasonal temperature mean, variation, and temperature at the start of the epidemic on disease dynamics. We find that at both constant and seasonally varying temperatures, warmer temperatures at the start of epidemics promote more rapid epidemics due to faster burnout of the susceptible population. By contrast, intermediate temperatures (24-25°C) at epidemic onset produced the largest epidemics in both constant and seasonally varying temperature regimes. When seasonal temperature variation was low, 25-35°C annual average temperatures produced the largest epidemics, but this range shifted to cooler temperatures as seasonal temperature variation increased (analogous to previous results for diurnal temperature variation). Tropical and sub-tropical cities such as Rio de Janeiro, Fortaleza, and Salvador, Brazil; Cali, Cartagena, and Barranquilla, Colombia; Delhi, India; Guangzhou, China; and Manila, Philippines have mean annual temperatures and seasonal temperature ranges that produced the largest epidemics. However, more temperate cities like Shanghai, China had high epidemic suitability because large seasonal variation offset moderate annual average temperatures. By accounting for seasonal variation in temperature, the model provides a baseline for mechanistically understanding environmental suitability for virus transmission by Aedes aegypti. Overlaying the impact of human activities and socioeconomic factors onto this mechanistic temperature-dependent framework is critical for understanding likelihood and magnitude of outbreaks.",
"title": ""
},
{
"docid": "18233af1857390bff51d2e713bc766d9",
"text": "Name disambiguation is a perennial challenge for any large and growing dataset but is particularly significant for scientific publication data where documents and ideas are linked through citations and depend on highly accurate authorship. Differentiating personal names in scientific publications is a substantial problem as many names are not sufficiently distinct due to the large number of researchers active in most academic disciplines today. As more and more documents and citations are published every year, any system built on this data must be continually retrained and reclassified to remain relevant and helpful. Recently, some incremental learning solutions have been proposed, but most of these have been limited to small-scale simulations and do not exhibit the full heterogeneity of the millions of authors and papers in real world data. In our work, we propose a probabilistic model that simultaneously uses a rich set of metadata and reduces the amount of pairwise comparisons needed for new articles. We suggest an approach to disambiguation that classifies in an incremental fashion to alleviate the need for retraining the model and re-clustering all papers and uses fewer parameters than other algorithms. Using a published dataset, we obtained the highest K-measure which is a geometric mean of cluster and author-class purity. Moreover, on a difficult author block from the Clarivate Analytics Web of Science, we obtain higher precision than other algorithms.",
"title": ""
},
{
"docid": "cc0a33df704f4f25d55c18cc0fca1124",
"text": "The underlying assumption in popular and scientific publications on sex differences in the brain is that human brains can take one of two forms \"male\" or \"female,\" and that the differences between these two forms underlie differences between men and women in personality, cognition, emotion, and behavior. Documented sex differences in brain structure are typically taken to support this dimorphic view of the brain. However, neuroanatomical data reveal that sex interacts with other factors in utero and throughout life to determine the structure of the brain, and that because these interactions are complex, the result is a multi-morphic, rather than a dimorphic, brain. More specifically, here I argue that human brains are composed of an ever-changing heterogeneous mosaic of \"male\" and \"female\" brain characteristics (rather than being all \"male\" or all \"female\") that cannot be aligned on a continuum between a \"male brain\" and a \"female brain.\" I further suggest that sex differences in the direction of change in the brain mosaic following specific environmental events lead to sex differences in neuropsychiatric disorders.",
"title": ""
},
{
"docid": "b6c2490fb82289d17092686a7338bbed",
"text": "Effective task management is essential to successful team collaboration. While the past decade has seen considerable innovation in systems that track and manage group tasks, these innovations have typically been outside of the principal communication channels: email, instant messenger, and group chat. Teams formulate, discuss, refine, assign, and track the progress of their collaborative tasks over electronic communication channels, yet they must leave these channels to update their task-tracking tools, creating a source of friction and inefficiency. To address this problem, we explore how bots might be used to mediate task management for individuals and teams. We deploy a prototype bot to eight different teams of information workers to help them create, assign, and keep track of tasks, all within their main communication channel. We derived seven insights for the design of future bots for coordinating work.",
"title": ""
},
{
"docid": "448285428c6b6cfca8c2937d8393eee5",
"text": "Swarm robotics is a novel approach to the coordination of large numbers of robots and has emerged as the application of swarm intelligence to multi-robot systems. Different from other swarm intelligence studies, swarm robotics puts emphases on the physical embodiment of individuals and realistic interactions among the individuals and between the individuals and the environment. In this chapter, we present a brief review of this new approach. We first present its definition, discuss the main motivations behind the approach, as well as its distinguishing characteristics and major coordination mechanisms. Then we present a brief review of swarm robotics research along four axes; namely design, modelling and analysis, robots and problems.",
"title": ""
},
{
"docid": "b5b4e637065ba7c0c18a821bef375aea",
"text": "The new era of mobile health ushered in by the wide adoption of ubiquitous computing and mobile communications has brought opportunities for governments and companies to rethink their concept of healthcare. Simultaneously, the worldwide urbanization process represents a formidable challenge and attracts attention toward cities that are expected to gather higher populations and provide citizens with services in an efficient and human manner. These two trends have led to the appearance of mobile health and smart cities. In this article we introduce the new concept of smart health, which is the context-aware complement of mobile health within smart cities. We provide an overview of the main fields of knowledge that are involved in the process of building this new concept. Additionally, we discuss the main challenges and opportunities that s-Health would imply and provide a common ground for further research.",
"title": ""
},
{
"docid": "c03ae003e3fd6503822480267108e2a6",
"text": "A relatively simple model of the phonological loop (A. D. Baddeley, 1986), a component of working memory, has proved capable of accommodating a great deal of experimental evidence from normal adult participants, children, and neuropsychological patients. Until recently, however, the role of this subsystem in everyday cognitive activities was unclear. In this article the authors review studies of word learning by normal adults and children, neuropsychological patients, and special developmental populations, which provide evidence that the phonological loop plays a crucial role in learning the novel phonological forms of new words. The authors propose that the primary purpose for which the phonological loop evolved is to store unfamiliar sound patterns while more permanent memory records are being constructed. Its use in retaining sequences of familiar words is, it is argued, secondary.",
"title": ""
},
{
"docid": "e022bcb002e2c851e697972a49c3e417",
"text": "A polymer membrane-coated palladium (Pd) nanoparticle (NP)/single-layer graphene (SLG) hybrid sensor was fabricated for highly sensitive hydrogen gas (H2) sensing with gas selectivity. Pd NPs were deposited on SLG via the galvanic displacement reaction between graphene-buffered copper (Cu) and Pd ion. During the galvanic displacement reaction, graphene was used as a buffer layer, which transports electrons from Cu for Pd to nucleate on the SLG surface. The deposited Pd NPs on the SLG surface were well-distributed with high uniformity and low defects. The Pd NP/SLG hybrid was then coated with polymer membrane layer for the selective filtration of H2. Because of the selective H2 filtration effect of the polymer membrane layer, the sensor had no responses to methane, carbon monoxide, or nitrogen dioxide gas. On the contrary, the PMMA/Pd NP/SLG hybrid sensor exhibited a good response to exposure to 2% H2: on average, 66.37% response within 1.81 min and recovery within 5.52 min. In addition, reliable and repeatable sensing behaviors were obtained when the sensor was exposed to different H2 concentrations ranging from 0.025 to 2%.",
"title": ""
},
{
"docid": "62ba312d26ffbbfdd52130c08031905f",
"text": "The effects of intravascular laser irradiation of blood (ILIB), with 405 and 632.8 nm on serum blood sugar (BS) level, were comparatively studied. Twenty-four diabetic type 2 patients received 14 sessions of ILIB with blue and red lights. BS was measured before and after therapy. Serum BS decreased highly significant after ILIB with both red and blue lights (p < 0.0001), but we did not find significant difference between red and blue lights. The ILIB effect would be of benefit in the clinical treatment of diabetic type 2 patients, irrespective of lasers (blue or red lights) that are used.",
"title": ""
},
{
"docid": "c841938f03a07fffc5150fbe18f8f740",
"text": "Ensemble modeling is now a well-established means for improving prediction accuracy; it enables you to average out noise from diverse models and thereby enhance the generalizable signal. Basic stacked ensemble techniques combine predictions from multiple machine learning algorithms and use these predictions as inputs to second-level learning models. This paper shows how you can generate a diverse set of models by various methods such as forest, gradient boosted decision trees, factorization machines, and logistic regression and then combine them with stacked-ensemble techniques such as hill climbing, gradient boosting, and nonnegative least squares in SAS Visual Data Mining and Machine Learning. The application of these techniques to real-world big data problems demonstrates how using stacked ensembles produces greater prediction accuracy and robustness than do individual models. The approach is powerful and compelling enough to alter your initial data mining mindset from finding the single best model to finding a collection of really good complementary models. It does involve additional cost due both to training a large number of models and the proper use of cross validation to avoid overfitting. This paper shows how to efficiently handle this computational expense in a modern SAS environment and how to manage an ensemble workflow by using parallel computation in a distributed framework.",
"title": ""
},
{
"docid": "dc83550afd690e371283428647ed806e",
"text": "Recently, convolutional neural networks have demonstrated excellent performance on various visual tasks, including the classification of common two-dimensional images. In this paper, deep convolutional neural networks are employed to classify hyperspectral images directly in spectral domain. More specifically, the architecture of the proposed classifier contains five layers with weights which are the input layer, the convolutional layer, the max pooling layer, the full connection layer, and the output layer. These five layers are implemented on each spectral signature to discriminate against others. Experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than some traditional methods, such as support vector machines and the conventional deep learning-based methods.",
"title": ""
},
{
"docid": "e44b05d7a4a2979168b876c9cdd8f573",
"text": "The network architecture of the human brain has become a feature of increasing interest to the neuroscientific community, largely because of its potential to illuminate human cognition, its variation over development and aging, and its alteration in disease or injury. Traditional tools and approaches to study this architecture have largely focused on single scales-of topology, time, and space. Expanding beyond this narrow view, we focus this review on pertinent questions and novel methodological advances for the multi-scale brain. We separate our exposition into content related to multi-scale topological structure, multi-scale temporal structure, and multi-scale spatial structure. In each case, we recount empirical evidence for such structures, survey network-based methodological approaches to reveal these structures, and outline current frontiers and open questions. Although predominantly peppered with examples from human neuroimaging, we hope that this account will offer an accessible guide to any neuroscientist aiming to measure, characterize, and understand the full richness of the brain's multiscale network structure-irrespective of species, imaging modality, or spatial resolution.",
"title": ""
},
{
"docid": "a0566ac90d164db763c7efa977d4bc0d",
"text": "Dead-time controls for synchronous buck converter are challenging due to the difficulties in accurate sensing and processing the on/off dead-time errors. For the control of dead-times, an integral feedback control using switched capacitors and a fast timing sensing circuit composed of MOSFET differential amplifiers and switched current sources are proposed. Experiments for a 3.3 V input, 1.5 V-0.3 A output converter demonstrated 1.3 ~ 4.6% efficiency improvement over a wide load current range.",
"title": ""
},
{
"docid": "0e1241d2136891c0623e370d12e7b127",
"text": "Within the CSCW community, little has been done to systematically analyze online eating disorder (ED) user generated content. In this paper, we present the results of a cross-platform content analysis of ED-related posts. We analyze the way that hashtags are used in ad-hoc ED- focused networks and present a comprehensive corpus of ED-terminology that frequently accompanies ED activities online. We provide exemplars of the types of ED-related content found online. Through this characterization of activities, we draw attention to the increasingly important role that these platforms play and how they are used and misappropriated for negative health purposes. We also outline specific challenges associated with researching these types of networks online. CAUTION: This paper includes media that could potentially be a trigger to those dealing with an eating disorder or with other self-injury illnesses. Please use caution when reading, printing, or disseminating this paper.",
"title": ""
},
{
"docid": "17fd98d5ea7fbc6ba8fce4dccdbb7fc6",
"text": "Knowledge compilation algorithms transform a probabilistic logic program into a circuit representation that permits efficient probability computation. Knowledge compilation underlies algorithms for exact probabilistic inference and parameter learning in several languages, including ProbLog, PRISM, and LPADs. Developing such algorithms involves a choice, of which circuit language to target, and which compilation algorithm to use. Historically, Binary Decision Diagrams (BDDs) have been a popular target language, whereas recently, deterministicDecomposable Negation Normal Form (d-DNNF) circuits were shown to outperform BDDs on these tasks. We investigate the use of a new language, called Sentential Decision Diagrams (SDDs), for inference in probabilistic logic programs. SDDs combine desirable properties of BDDs and d-DNNFs. Like BDDs, they support bottom-up compilation and circuit minimization, yet they are a more general and flexible representation. Our preliminary experiments show that compilation to SDD yields smaller circuits and more scalable inference, outperforming the state of the art in ProbLog inference.",
"title": ""
},
{
"docid": "308cb8104e4b5fbf6bedfd28fec68ea6",
"text": "This paper presents a novel method for visual-inertial odometry. The method is based on an information fusion framework employing low-cost IMU sensors and the monocular camera in a standard smartphone. We formulate a sequential inference scheme, where the IMU drives the dynamical model and the camera frames are used in coupling trailing sequences of augmented poses. The novelty in the model is in taking into account all the cross-terms in the updates, thus propagating the inter-connected uncertainties throughout the model. Stronger coupling between the inertial and visual data sources leads to robustness against occlusion and feature-poor environments. We demonstrate results on data collected with an iPhone and provide comparisons against the Tango device and using the EuRoC data set.",
"title": ""
},
{
"docid": "8966d588d11eac49f4cc98e70f7333e6",
"text": "The timeliness and synchronization requirements of multimedia data demand e&ient buffer management and disk access schemes for multimedia database systems. The data rates involved are very high and despite the developmenl of eficient storage and retrieval strategies, disk I/O is a potential bottleneck, which limits the number of concurrent sessions supported by a system. This calls for more eficient use of data that has already been brought into the buffer. We introduce the notion of continuous media caching, which is a simple and novel technique where data that have been played back by a user are preserved in a controlled fashion for use by subsequent users requesting the same data. We present heuristics to determine when continuous media sharing is beneficial and describe the bufler management algorithms. Simulation studies indicate that our technique substantially improves the performance of multimedia database applications where data sharing is possible.",
"title": ""
},
{
"docid": "4bce6150e9bc23716a19a0d7c02640c0",
"text": "A Data Mining Framework for Constructing Features and Models for Intrusion Detection Systems",
"title": ""
},
{
"docid": "e62ad0c67fa924247f05385bda313a38",
"text": "Artificial neural networks have been recognized as a powerful tool for pattern classification problems, but a number of researchers have also suggested that straightforward neural-network approaches to pattern recognition are largely inadequate for difficult problems such as handwritten numeral recognition. In this paper, we present three sophisticated neural-network classifiers to solve complex pattern recognition problems: multiple multilayer perceptron (MLP) classifier, hidden Markov model (HMM)/MLP hybrid classifier, and structure-adaptive self-organizing map (SOM) classifier. In order to verify the superiority of the proposed classifiers, experiments were performed with the unconstrained handwritten numeral database of Concordia University, Montreal, Canada. The three methods have produced 97.35%, 96.55%, and 96.05% of the recognition rates, respectively, which are better than those of several previous methods reported in the literature on the same database.",
"title": ""
}
] |
scidocsrr
|
e92f7bbbadbc5140b95797cadbe07701
|
ATHENA: An Ontology-Driven System for Natural Language Querying over Relational Data Stores
|
[
{
"docid": "5b89c42eb7681aff070448bc22e501ea",
"text": "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm.",
"title": ""
}
] |
[
{
"docid": "b215d3604e19c7023049c082b10d7aac",
"text": "In this paper, we discuss how we can extend probabilistic topic models to analyze the relationship graph of popular social-network data, so that we can group or label the edges and nodes in the graph based on their topic similarity. In particular, we first apply the well-known Latent Dirichlet Allocation (LDA) model and its existing variants to the graph-labeling task and argue that the existing models do not handle popular nodes (nodes with many incoming edges) in the graph very well. We then propose possible extensions to this model to deal with popular nodes. Our experiments show that the proposed extensions are very effective in labeling popular nodes, showing significant improvements over the existing methods. Our proposed methods can be used for providing, for instance, more relevant friend recommendations within a social network.",
"title": ""
},
{
"docid": "1c0b590a687f628cb52d34a37a337576",
"text": "Hexagonal torus networks are special family of Eisenstein-Jacobi (EJ) networks which have gained popularity as good candidates network On-Chip (NoC) for interconnecting Multiprocessor System-on-Chips (MPSoCs). They showed better topological properties compared to the 2D torus networks with the same number of nodes. All-to-all broadcast is a collective communication algorithm used frequently in some parallel applications. Recently, an off-chip all-to-all broadcast algorithm has been proposed for hexagonal torus networks assuming half-duplex links and all-ports communication. The proposed all-to-all broadcast algorithm does not achieve the minimum transmission time and requires 24 kextra buffers, where kis the network diameter. We first extend this work by proposing an efficient all-to-all broadcast on hexagonal torus networks under full-duplex links and all-ports communications assumptions which achieves the minimum transmission delay but requires 36 k extra buffers per router. In a second stage, we develop a new all-to-all broadcast more suitable for hexagonal torus network on-chip that achieves optimal transmission delay time without requiring any extra buffers per router. By reducing the amount of buffer space, the new all-to-all broadcast reduces the routers cost which is an important issue in NoCs architectures.",
"title": ""
},
{
"docid": "debcc046323ffbd9a093c8e07d37960e",
"text": "This review discusses the theory and practical application of independent component analysis (ICA) to multi-channel EEG data. We use examples from an audiovisual attention-shifting task performed by young and old subjects to illustrate the power of ICA to resolve subtle differences between evoked responses in the two age groups. Preliminary analysis of these data using ICA suggests a loss of task specificity in independent component (IC) processes in frontal and somatomotor cortex during post-response periods in older as compared to younger subjects, trends not detected during examination of scalp-channel event-related potential (ERP) averages. We discuss possible approaches to component clustering across subjects and new ways to visualize mean and trial-by-trial variations in the data, including ERP-image plots of dynamics within and across trials as well as plots of event-related spectral perturbations in component power, phase locking, and coherence. We believe that widespread application of these and related analysis methods should bring EEG once again to the forefront of brain imaging, merging its high time and frequency resolution with enhanced cm-scale spatial resolution of its cortical sources.",
"title": ""
},
{
"docid": "e27575b8d7a7455f1a8f941adb306a04",
"text": "Seung-Joon Yi GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: [email protected] Stephen G. McGill GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: [email protected] Larry Vadakedathu GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: [email protected] Qin He GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: [email protected] Inyong Ha Robotis, Seoul, Korea e-mail: [email protected] Jeakweon Han Robotis, Seoul, Korea e-mail: [email protected] Hyunjong Song Robotis, Seoul, Korea e-mail: [email protected] Michael Rouleau RoMeLa, Virginia Tech, Blacksburg, Virginia 24061 e-mail: [email protected] Byoung-Tak Zhang BI Lab, Seoul National University, Seoul, Korea e-mail: [email protected] Dennis Hong RoMeLa, University of California, Los Angeles, Los Angeles, California 90095 e-mail: [email protected] Mark Yim GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: [email protected] Daniel D. Lee GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: [email protected]",
"title": ""
},
{
"docid": "53fcf4f5285b7a93d99d2c222dfe21dd",
"text": "OBJECTIVES\nTo determine whether the use of a near-infrared light venipuncture aid (VeinViewer; Luminetx Corporation, Memphis, Tenn) would improve the rate of successful first-attempt placement of intravenous (IV) catheters in a high-volume pediatric emergency department (ED).\n\n\nMETHODS\nPatients younger than 20 years with standard clinical indications for IV access were randomized to have IV placement by ED nurses (in 3 groups stratified by 5-year blocks of nursing experience) using traditional methods (standard group) or with the aid of the near-infrared light source (device group). If a vein could not be cannulated after 3 attempts, patients crossed over from one study arm to the other, and study nurses attempted placement with the alternative technique. The primary end point was first-attempt success rate for IV catheter placement. After completion of patient enrollment, a questionnaire was completed by study nurses as a qualitative assessment of the device.\n\n\nRESULTS\nA total of 123 patients (median age, 3 years) were included in the study: 62 in the standard group and 61 in the device group. There was no significant difference in first-attempt success rate between the standard (79.0%, 95% confidence interval [CI], 66.8%-88.3%) and device (72.1%, 95% CI, 59.2%-82.9%) groups. Of the 19 study nurses, 14 completed the questionnaire of whom 70% expressed neutral or unfavorable assessments of the device in nondehydrated patients without chronic underlying medical conditions and 90% found the device a helpful tool for patients in whom IV access was difficult.\n\n\nCONCLUSIONS\nFirst-attempt success rate for IV placement was nonsignificantly higher without than with the assistance of a near-infrared light device in a high-volume pediatric ED. Nurses placing IVs did report several benefits to use of the device with specific patient groups, and future research should be conducted to demonstrate the role of the device in these patients.",
"title": ""
},
{
"docid": "bcf27c4f750ab74031b8638a9b38fd87",
"text": "δ opioid receptor (DOR) was the first opioid receptor of the G protein‑coupled receptor family to be cloned. Our previous studies demonstrated that DOR is involved in regulating the development and progression of human hepatocellular carcinoma (HCC), and is involved in the regulation of the processes of invasion and metastasis of HCC cells. However, whether DOR is involved in the development and progression of drug resistance in HCC has not been reported and requires further elucidation. The aim of the present study was to investigate the expression levels of DOR in the drug‑resistant HCC BEL‑7402/5‑fluorouracil (BEL/FU) cell line, and its effects on drug resistance, in order to preliminarily elucidate the effects of DOR in HCC drug resistance. The results of the present study demonstrated that DOR was expressed at high levels in the BEL/FU cells, and the expression levels were higher, compared with those in normal liver cells. When the expression of DOR was silenced, the proliferation of the drug‑resistant HCC cells were unaffected. However, when the cells were co‑treated with a therapeutic dose of 5‑FU, the proliferation rate of the BEL/FU cells was significantly inhibited, a large number of cells underwent apoptosis, cell cycle progression was arrested and changes in the expression levels of drug‑resistant proteins were observed. Overall, the expression of DOR was upregulated in the drug‑resistant HCC cells, and its functional status was closely associated with drug resistance in HCC. Therefore, DOR may become a recognized target molecule with important roles in the clinical treatment of drug‑resistant HCC.",
"title": ""
},
{
"docid": "a701b681b5fb570cf8c0668fe691ee15",
"text": "Coagulation-flocculation is a relatively simple physical-chemical technique in treatment of old and stabilized leachate which has been practiced using a variety of conventional coagulants. Polymeric forms of metal coagulants which are increasingly applied in water treatment are not well documented in leachate treatment. In this research, capability of poly-aluminum chloride (PAC) in the treatment of stabilized leachate from Pulau Burung Landfill Site (PBLS), Penang, Malaysia was studied. The removal efficiencies for chemical oxygen demand (COD), turbidity, color and total suspended solid (TSS) obtained using PAC were compared with those obtained using alum as a conventional coagulant. Central composite design (CCD) and response surface method (RSM) were applied to optimize the operating variables viz. coagulant dosage and pH. Quadratic models developed for the four responses (COD, turbidity, color and TSS) studied indicated the optimum conditions to be PAC dosage of 2g/L at pH 7.5 and alum dosage of 9.5 g/L at pH 7. The experimental data and model predictions agreed well. COD, turbidity, color and TSS removal efficiencies of 43.1, 94.0, 90.7, and 92.2% for PAC, and 62.8, 88.4, 86.4, and 90.1% for alum were demonstrated.",
"title": ""
},
{
"docid": "6c720d68e8cea8f4c1fc17006af464cd",
"text": "In this paper, a high-range 60-GHz monostatic transceiver system suitable for frequency-modulated continuous-wave (FMCW) applications is presented. The RF integrated circuit is fabricated using a 0.13-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> SiGe BiCMOS technology with <inline-formula> <tex-math notation=\"LaTeX\">$f_{T}$ </tex-math></inline-formula>/<inline-formula> <tex-math notation=\"LaTeX\">$f_{\\max }$ </tex-math></inline-formula> of 250/340 GHz and occupies a very compact area of <inline-formula> <tex-math notation=\"LaTeX\">$1.42 \\times 0.72$ </tex-math></inline-formula> mm<sup>2</sup>. All of the internal blocks are designed fully differential with an in-phase/quadrature receiver (RX) conversion gain of 14.8 dB and −18.2 dBm of input-referred 1-dB compression point and a transmitter (TX) with 6.4 dBm of output power. The 60-GHz voltage-controlled oscillator is of a push-push type Colpitts oscillator integrated into a frequency divider with an output frequency between 910 MHz and 1 GHz with the help of 3-bit frequency tuning mechanism for external phase-locked loop operations. Between the TX and RX channels, a tunable coupler is placed to guarantee a high isolation between channels which could withstand any fabrication failures and provide a single differential antenna output. On the TX side, two power detectors are placed in order to monitor the transmitted and reflected powers on the TX channel by passing through a branch-line coupler for built-in-self-test purposes. The total current consumption of this transceiver is 156 mA at 3.3 V of single supply. Considering the successful real-time radar measurements, which the radar is able to detect the objects in more than 90-m range, it proves the suitability of this monostatic chip in high-range FMCW radar systems.",
"title": ""
},
{
"docid": "2a99b3123b80a3d0527349ee93b3fca5",
"text": "Information explosion that can be generated by anyone may lead to the spread of fake news not only at the news channel, but also at social media, and so forth. Detection of fake news has become an urgent need on the society because of fake news spread of unrest in the society. Several related studies have been conducted in the news classification with the aim of providing a decision whether a news is included in fake news or original news. In the related research, a vector representation of documents is used. This vector representation is then given to the algorithm for further processing. This study aims to model vectors that can accommodate the characteristics of fake news before further processed by language algorithms using the Indonesian language. In this research, fake news and original news are represented according to the vector space model. Vector model combination of frequency term, inverse document frequency and frequency reversed with 10-fold cross validation using support vector machine algorithm classifier. Variations of phrase detection as well as name recognition entities (entity recognition names) are also used in vector representation. A vector representation that uses the term frequency shows promising performance. It can recognize news characteristics correctly 96.74% of 2516 documents across phrase detection and named entity recognition process.",
"title": ""
},
{
"docid": "21a917abee792625539e7eabb3a81f4c",
"text": "This paper investigates the power operation in information system development (ISD) processes. Due to the fact that key actors in different departments possess different professional knowledge, their different contexts lead to some employees supporting IS, while others resist it to achieve their goals. We aim to interpret these power operations in ISD from the theory of technological frames. This study is based on qualitative data collected from KaoKang (pseudonym), a port authority in Taiwan. We attempt to understand the situations of different key actors (e.g. top manager, MIS professionals, employees of DP-1 division, consultants of KaoKang, and customers (outside users)) who wield power in ISD in different situations. In this respect, we interpret the data using a technological frame. Finally, we aim to gain fresh insight into power operation in ISD from this perspective.",
"title": ""
},
{
"docid": "469e3a398e0d2772467fd14e5dd44d8b",
"text": "We present a method for simultaneously recovering shape and spatially varying reflectance of a surface from photometric stereo images. The distinguishing feature of our approach is its generality; it does not rely on a specific parametric reflectance model and is therefore purely ldquodata-drivenrdquo. This is achieved by employing novel bi-variate approximations of isotropic reflectance functions. By combining this new approximation with recent developments in photometric stereo, we are able to simultaneously estimate an independent surface normal at each point, a global set of non-parametric ldquobasis materialrdquo BRDFs, and per-point material weights. Our experimental results validate the approach and demonstrate the utility of bi-variate reflectance functions for general non-parametric appearance capture.",
"title": ""
},
{
"docid": "d208033e210816d7a9454749080587d9",
"text": "Graph classification is a problem with practical applications in many different domains. Most of the existing methods take the entire graph into account when calculating graph features. In a graphlet-based approach, for instance, the entire graph is processed to get the total count of different graphlets or subgraphs. In the real-world, however, graphs can be both large and noisy with discriminative patterns confined to certain regions in the graph only. In this work, we study the problem of attentional processing for graph classification. The use of attention allows us to focus on small but informative parts of the graph, avoiding noise in the rest of the graph. We present a novel RNN model, called the Graph Attention Model (GAM), that processes only a portion of the graph by adaptively selecting a sequence of “interesting” nodes. The model is equipped with an external memory component which allows it to integrate information gathered from different parts of the graph. We demonstrate the effectiveness of the model through various experiments.",
"title": ""
},
{
"docid": "4689161101a990d17b08e27b3ccf2be3",
"text": "The growth of the software game development industry is enormous and is gaining importance day by day. This growth imposes severe pressure and a number of issues and challenges on the game development community. Game development is a complex process, and one important game development choice is to consider the developer’s perspective to produce good-quality software games by improving the game development process. The objective of this study is to provide a better understanding of the developer’s dimension as a factor in software game success. It focuses mainly on an empirical investigation of the effect of key developer’s factors on the software game development process and eventually on the quality of the resulting game. A quantitative survey was developed and conducted to identify key developer’s factors for an enhanced game development process. For this study, the developed survey was used to test the research model and hypotheses. The results provide evidence that game development organizations must deal with multiple key factors to remain competitive and to handle high pressure in the software game industry. The main contribution of this paper is to investigate empirically the influence of key developer’s factors on the game development process.",
"title": ""
},
{
"docid": "56bad8cef0c8ed0af6882dbc945298ef",
"text": "We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.",
"title": ""
},
{
"docid": "ad68a9ecf4ba36ec924ec22afaafd9f3",
"text": "The convergence rate and final performance of common deep learning models have significantly benefited from heuristics such as learning rate schedules, knowledge distillation, skip connections, and normalization layers. In the absence of theoretical underpinnings, controlled experiments aimed at explaining these strategies can aid our understanding of deep learning landscapes and the training dynamics. Existing approaches for empirical analysis rely on tools of linear interpolation and visualizations with dimensionality reduction, each with their limitations. Instead, we revisit such analysis of heuristics through the lens of recently proposed methods for loss surface and representation analysis, viz., mode connectivity and canonical correlation analysis (CCA), and hypothesize reasons for the success of the heuristics. In particular, we explore knowledge distillation and learning rate heuristics of (cosine) restarts and warmup using mode connectivity and CCA. Our empirical analysis suggests that: (a) the reasons often quoted for the success of cosine annealing are not evidenced in practice; (b) that the effect of learning rate warmup is to prevent the deeper layers from creating training instability; and (c) that the latent knowledge shared by the teacher is primarily disbursed to the deeper layers.",
"title": ""
},
{
"docid": "8db35bf9fd2969c579594e726370700d",
"text": "Wireless Sensor Networks (WSNs), in recent times, have become one of the most promising network solutions with a wide variety of applications in the areas of agriculture, environment, healthcare and the military. Notwithstanding these promising applications, sensor nodes in WSNs are vulnerable to different security attacks due to their deployment in hostile and unattended areas and their resource constraints. One of such attacks is the DoS jamming attack that interferes and disrupts the normal functions of sensor nodes in a WSN by emitting radio frequency signals to jam legitimate signals to cause a denial of service. In this work we propose a step-wise approach using a statistical process control technique to detect these attacks. We deploy an exponentially weighted moving average (EWMA) to detect anomalous changes in the intensity of a jamming attack event by using the packet inter-arrival feature of the received packets from the sensor nodes. Results obtained from a trace-driven simulation show that the proposed solution can efficiently and accurately detect jamming attacks in WSNs with little or no overhead.",
"title": ""
},
{
"docid": "c804aa80440827033fa787723d23c698",
"text": "The present paper analyzes the self-generated explanations (from talk-aloud protocols) that “Good” ond “Poor” students produce while studying worked-out exomples of mechanics problems, and their subsequent reliance on examples during problem solving. We find that “Good” students learn with understanding: They generate many explanations which refine and expand the conditions for the action ports of the exomple solutions, ond relate these actions to principles in the text. These self-explanations are guided by accurate monitoring of their own understanding and misunderstanding. Such learning results in example-independent knowledge and in a better understanding of the principles presented in the text. “Poor” students do not generate sufficient self-explonations, monitor their learning inaccurately, and subsequently rely heovily an examples. We then discuss the role of self-explanations in facilitating problem solving, as well OS the adequacy of current Al models of explanation-based learning to account for these psychological findings.",
"title": ""
},
{
"docid": "e92833a68f85cf909b122880fee7cc80",
"text": "With the increasing amount of videos recorded using 2D mobile cameras, the technique for recovering the 3D dynamic facial models from these monocular videos has become a necessity for many image and video editing applications. While methods based parametric 3D facial models can reconstruct the 3D shape in dynamic environment, large structural changes are ignored. Structure-from-motion methods can reconstruct these changes but assume the object to be static. To address this problem we present a novel method for realtime dynamic 3D facial tracking and reconstruction from videos captured in uncontrolled environments. Our method can track the deforming facial geometry and reconstruct external objects that protrude from the face such as glasses and hair. It also allows users to move around, perform facial expressions freely without degrading the reconstruction quality.",
"title": ""
},
{
"docid": "7ba3f13f58c4b25cc425b706022c1f2b",
"text": "Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast/Faster R-CNN [1,2] have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN [2] for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available.",
"title": ""
},
{
"docid": "17ecf3c7b53e81642cf0cb2d75c2bfb3",
"text": "Serverless computing is widely known as an event-driven cloud execution model. In this model, the client provides the code and the cloud provider manages the life-cycle of the execution environment of that code. The idea is based on reducing the life span of the program to execute functionality in response to an event. Hence, the program's processes are born when an event is triggered and are killed after the event is processed. This model has proved its usefulness in the cloud as it reduced the operational cost and complexity of executing event-driven workloads. In this paper we argue that the serverless model does not have to be limited the to the cloud. We show how the same model can be applied at the micro-level of a single machine. In such model, certain operating system commands are treated as events that trigger a serverless reaction. This reaction consists of deploying and running code only in response to those events. Thus, reducing the attack surface and complexity of managing single machines.",
"title": ""
}
] |
scidocsrr
|
3aa5f0adf39b6dd6062441fa5a3baedf
|
DIMENSION: Dynamic MR Imaging with Both K-space and Spatial Prior Knowledge Obtained via Multi-Supervised Network Training
|
[
{
"docid": "26f965ee6e5de111fa8073707fee6b69",
"text": "Over past several years, machine learning, or more generally artificial intelligence, has generated overwhelming research interest and attracted unprecedented public attention. As tomographic imaging researchers, we share the excitement from our imaging perspective [item 1) in the Appendix], and organized this special issue dedicated to the theme of “Machine learning for image reconstruction.” This special issue is a sister issue of the special issue published in May 2016 of this journal with the theme “Deep learning in medical imaging” [item 2) in the Appendix]. While the previous special issue targeted medical image processing/analysis, this special issue focuses on data-driven tomographic reconstruction. These two special issues are highly complementary, since image reconstruction and image analysis are two of the main pillars for medical imaging. Together we cover the whole workflow of medical imaging: from tomographic raw data/features to reconstructed images and then extracted diagnostic features/readings.",
"title": ""
},
{
"docid": "d54aff38bab1a8877877ddba9e20e88d",
"text": "SiMultaneous Acquisition of Spatial Harmonics (SMASH) is a new fast-imaging technique that increases MR image acquisition speed by an integer factor over existing fast-imaging methods, without significant sacrifices in spatial resolution or signal-to-noise ratio. Image acquisition time is reduced by exploiting spatial information inherent in the geometry of a surface coil array to substitute for some of the phase encoding usually produced by magnetic field gradients. This allows for partially parallel image acquisitions using many of the existing fast-imaging sequences. Unlike the data combination algorithms of prior proposals for parallel imaging, SMASH reconstruction involves a small set of MR signal combinations prior to Fourier transformation, which can be advantageous for artifact handling and practical implementation. A twofold savings in image acquisition time is demonstrated here using commercial phased array coils on two different MR-imaging systems. Larger time savings factors can be expected for appropriate coil designs.",
"title": ""
}
] |
[
{
"docid": "460a296de1bd13378d71ce19ca5d807a",
"text": "Many books discuss applications of data mining. For financial data analysis and financial modeling, see Benninga and Czaczkes [BC00] and Higgins [Hig03]. For retail data mining and customer relationship management, see books by Berry and Linoff [BL04] and Berson, Smith, and Thearling [BST99], and the article by Kohavi [Koh01]. For telecommunication-related data mining, see the book by Mattison [Mat97]. Chen, Hsu, and Dayal [CHD00] reported their work on scalable telecommunication tandem traffic analysis under a data warehouse/OLAP framework. For bioinformatics and biological data analysis, there are a large number of introductory references and textbooks. An introductory overview of bioinformatics for computer scientists was presented by Cohen [Coh04]. Recent textbooks on bioinformatics include Krane and Raymer [KR03], Jones and Pevzner [JP04], Durbin, Eddy, Krogh and Mitchison [DEKM98], Setubal and Meidanis [SM97], Orengo, Jones, and Thornton [OJT03], and Pevzner [Pev03]. Summaries of biological data analysis methods and algorithms can also be found in many other books, such as Gusfield [Gus97], Waterman [Wat95], Baldi and Brunak [BB01], and Baxevanis and Ouellette [BO04]. There are many books on scientific data analysis, such as Grossman, Kamath, Kegelmeyer, et al. (eds.) [GKK01]. For geographic data mining, see the book edited by Miller and Han [MH01]. Valdes-Perez [VP99] discusses the principles of human-computer collaboration for knowledge discovery in science. For intrusion detection, see Barbará [Bar02] and Northcutt and Novak [NN02].",
"title": ""
},
{
"docid": "a0a46f9ec5221b1a6c95bb8c45f1a8a7",
"text": "This paper describes the steps for achieving data processing in a methodological context, which take part of a methodology previously proposed by the authors for developing Data Mining (DM) applications, called \"Methodology for the development of data mining applications based on organizational analysis\". The methodology has three main phases: Knowledge of the Organization, Preparation and treatment of data, and finally, development of the DM application. We will focus on the second phase. The main contribution of this proposal is the design of a methodological framework of the second phase based on the paradigm of Data Science (DS), in order to get what we have called “Vista Minable Operacional” (VMO) from the “Vista Minable Conceptual” (VMC). The VMO built is used in the third phase. This methodological framework has been applied in two different cases of study, oil and public health.",
"title": ""
},
{
"docid": "e0b8b4c2431b92ff878df197addb4f98",
"text": "Malware classification is a critical part of the cybersecurity. Traditional methodologies for the malware classification typically use static analysis and dynamic analysis to identify malware. In this paper, a malware classification methodology based on its binary image and extracting local binary pattern (LBP) features is proposed. First, malware images are reorganized into 3 by 3 grids which are mainly used to extract LBP feature. Second, the LBP is implemented on the malware images to extract features in that it is useful in pattern or texture classification. Finally, Tensorflow, a library for machine learning, is applied to classify malware images with the LBP feature. Performance comparison results among different classifiers with different image descriptors such as GIST, a spatial envelope, and the LBP demonstrate that our proposed approach outperforms others.",
"title": ""
},
{
"docid": "8f2fe2747f77c95150ff9134b57c5027",
"text": "To investigate structural changes in the retina by histologic evaluation and in vivo spectral domain optical coherence tomography (SD-OCT) following selective retina therapy (SRT) controlled by optical feedback techniques (OFT). SRT was applied to 12 eyes of Dutch Belted rabbits. Retinal changes were assessed based on fundus photography, fluorescein angiography (FAG), SD-OCT, light microscopy, transmission electron microscopy (TEM), and scanning electron microscopy (SEM) at each of the following time points: 1 h, and 1, 3, 7, 14 and 28 days after SRT. BrdU (5’-bromo-2’-deoxy-uridine) incorporation assay was also conducted to evaluate potential proliferation of RPE cells. SRT lesions at1 h after SRT were ophthalmoscopically invisible. FAG showed leakage in areas corresponding to SRT lesions, and hyperfluorescence disappeared after 7 days. SD-OCT showed that decreased reflectivity corresponding to RPE damage was restored to normal over time in SRT lesions. Histologic analysis revealed that the damage in SRT lesions was primarily limited to the retinal pigment epithelium (RPE) and the outer segments of the photoreceptors. SEM and TEM showed RPE cell migration by day 3 after SRT, and restoration of the RPE monolayer with microvilli by 1 week after SRT. At 14 and 28 days, ultrastructures of the RPE, including the microvilli and tight junctions, were completely restored. The outer segments of the photoreceptors also recovered without sequelae. Interdigitation between the RPE and photoreceptors was observed. BrdU incorporation assay revealed proliferation of RPE on day 3 after SRT, and peak proliferation was observed on day 7 after SRT. Based on multimodal imaging and histologic assessment, our findings demonstrate that SRT with OFT could selectively target the RPE without damaging the neurosensory retina. Therefore, the use of SRT with OFT opens the door to the possibility of clinical trials of well-defined invisible and nondestructive retina therapy, especially for macular disease.",
"title": ""
},
{
"docid": "4649a0a87bec26cda589081c47729b52",
"text": "The majority of the effort in metrics research has addressed research evaluation. Far less research has been done to address the unique problems of research planning. Models and maps of science that can address the detailed problems associated with research planning are needed. This article reports on the creation of an article-level model and map of science covering 16 years and nearly 20 million articles using co-citation-based techniques. The map is then used to define disciplinelike structures consisting of natural groupings of articles and clusters of articles. This combination of detail and high-level structure can be used to address planning-related problems such as identification of emerging topics, and the identification of which areas of science and technology are innovative and which are simply persisting. In addition to presenting the model and map, several process improvements that result in higher accuracy structures are detailed, including a bibliographic coupling approach for assigning current papers to co-citation clusters, and a sequentially hybrid approach to producing visual maps from models. Introduction The majority of the effort in metrics (sciento-, biblio-, infor-, alt-) studies has been aimed at research evaluation (B. R. Martin, Nightingale, & Yegros-Yegros, 2012). Examples of evaluation-related topics include impact factors, the h-index and other related indices, university rankings and national level science indicators. The 40+ year history of the use of documentbased indicators for research evaluation includes publication of handbooks (cf., Moed, Glänzel, & Schmoch, 2004), the introduction of new journals (e.g., Scientometrics, Journal of Informetrics) and several annual or biannual conferences (e.g., ISSI, Collnet, STI/ENID) specifically aimed at reporting on these activities. The evaluation of research using scientific and technical documents is a well-established area of research. Far less effort in metrics research has been aimed at the unique problems of research planning. Planning-related questions (Börner, Boyack, Milojević, & Morris, 2012) that are asked by funders, administrators and researchers are different from evaluation-related questions. As such, they require different models than those that are used for evaluation. For example, planning requires a model that can predict emerging topics in science and technology. Funders need models that can help them identify the most innovative and promising proposals and researchers. Administrators, particularly those in industry, need models to help them best allocate their internal research funds, including knowing which existing areas to cut. To enable detailed planning, document-based models of science and technology need to be highly granular, and while based on retrospective data, must be robust enough to enable forecasting. Overall, the This is a preprint of an article accepted for publication in Journal of the American Society for Information Science and Technology copyright © 2013 (American Society for Information Science and Technology) 2 technical requirements of a model that can be used for planning are unique. Research and development of such models is an under-developed area in metrics research. To that end, this article reports on a co-citation-based model and map of science comprised of nearly 20 million articles over 16 years that has the potential to be used to answer planningrelated questions. 
Although the model is similar in concept to one that has been previously reported (Klavans & Boyack, 2011), it differs in several significant aspects: it uses an improved current paper assignment process, it uses an improved map layout process, and it has been used to create article-level discipline-like structures to provide context for its detailed structure. In the balance of the article, we first review related work to provide context for the work reported here. We then detail the experiments that led to the improvements in our map creation process. This is followed by introduction and characterization of the model and map, along with a description of how the map was divided into discipline-like groupings. The paper concludes with a brief discussion of how the map will be used in the future for a variety of analyses. Background Science mapping, when reduced to its most basic components, is a combination of classification and visualization. We assume there is a structure to science, and then we seek to create a representation of that structure by partitioning sets of documents (or journals, authors, grants, etc.) into different groups. This act of partitioning is the classification part of science mapping, and typically takes the majority of the effort. The resulting classification system, along with some representation of the relationships between the partitions, can be thought of as a model of science inasmuch as it specifies structure. The visualization part of science mapping uses the classification and relationships as input, and creates a visual representation (or map) of that model as output. Mapping of scientific structure using data on scientific publications began not long after the introduction of ISI’s citation indexes in the 1950s. Since then, science mapping has been done at a variety of scales and with a variety of data types. Science mapping studies have been roughly evenly split between document, journal, and author-based maps. Both text-based and citation-based methods have been used. Many of these different types of studies have been reviewed at intervals in the past (Börner, Chen, & Boyack, 2003; Morris & Martens, 2008; White & McCain, 1997). Each type of study is aimed at answering certain types of questions. For example, author-based maps are most often used to characterize the major topics within an area of science, and to show the key researchers in those topic areas. Journal-based models and maps are often used to characterize discipline-level structures in science. Overlay maps based on journals can be used to answer high level policy questions (Rafols, Porter, & Leydesdorff, 2010). However, more detailed questions, such as questions related to planning, require the use of document-level models and maps of science. The balance of this section thus focuses on document-level models and maps. When it comes to mapping of document sets, most studies have been done using local datasets. The term ‘local’ is used here to denote a small set of topics or a small subset of the whole of science.
While these local studies have successfully been used to improve mapping techniques, and to provide detailed information about the areas they study, we prefer global mapping because of the increased context and accuracy that are enabled by mapping of all of science (Klavans & Boyack, 2011). The context for work presented here lies in the efforts undertaken since the 1970s to map all of science at the document level using citation-based techniques. The first map of worldwide science based on documents was created by Griffith, Small, Stonehill & Dey (1974). Their map, based on co-citation analysis, contained 1,310 highly cited references in 115 clusters, showing the most highly cited areas in biomedicine, physics, and chemistry. Henry Small continued generating document-level maps using co-citation analysis (Small, Sweeney, & Greenlee, 1985), typically using thresholds based on fractional citation counting that ended up keeping roughly (but not strictly) the top 1% of highly cited references by discipline. The mapping process and software created by Small at the Institute for Scientific Information (ISI) evolved to generate hierarchically nested maps with four levels. Small (1999) presents a four level map based on nearly 130,000 highly cited references from papers published in 1995, which contained nearly 19,000 clusters at its lowest level. At roughly the same time, the Center for Research Planning (CRP) was creating similar maps for the private sector using similar thresholds and methods (Franklin & Johnston, 1988). One major difference is that CRP’s maps only used one level of clustering rather than multiple levels. The next major step in mapping all of science at the document level took place in the mid-2000’s when Klavans & Boyack (2006) created co-citation models of over 700,000 reference papers and bibliographic coupling models of over 700,000 current papers from the 2002 file-year of the combined Science and Social Science Citation Indexes. Later, Boyack (2009) used bibliographic coupling to create a model and map of nearly 1,000,000 documents in 117,000 clusters from the 2003 citation indexes. Through 2004, the citation indexes from ISI were the only comprehensive data source that could be used for such maps. The introduction of the Scopus database in late 2004 provided another data source that could be used for comprehensive models and maps of science. Klavans & Boyack (2010) used Scopus data from 2007 to create a co-citation model of science comprised of over 2,000,000 reference papers assigned to 84,000 clusters. Over 5,600,000 citing papers from 2003-2007 were assigned to these co-citation clusters based on reference patterns. All of the models and maps mentioned to this point have been static maps – that is they were all created using data from a single year, and were snapshot pictures of science at a single point in time. It is only recently that researchers have created maps of all of science that are longitudinal in nature. In the first of these, Klavans & Boyack (2011) extended their co-citation mapping approach by linking together nine annual models of science to generate a nine-year global map of science comprised of 10,360,000 papers from 2000-2008. The clusters in this model have been found",
"title": ""
},
{
"docid": "e14da1f5c1a6a03e3810d64366837e45",
"text": "Within the hard real-time community, static priority pre-emptive scheduling is receiving increased attention. Current optimal priority assignment schemes require that at some point in the system lifetime all tasks must be released simultaneously. Two main optimal priority assignment schemes have been proposed: rate-monotonic, where task period equals deadline, and deadlinemonotonic where task deadline maybe less than period. When tasks are permitted to have arbitrary start times, a common release time between all tasks in a task set may not occur. In this eventuality, both rate-monotonic and deadline-monotonic priority assignments cease to be optimal. This paper presents an method of determining if the tasks with arbitrary release times will ever share a common release time. This has complexity O(m loge m) in the longest task period. Also, an optimal priority assignment method is given, of complexity O(n 2 + n) in the number of tasks. Finally, an efficient feasibility test is presented, for those task sets whose tasks do not share a common release time.",
"title": ""
},
{
"docid": "5ce82b8c2cc87ae84026d230f3a97e06",
"text": "This paper presents a new physically-based method for predicting natural hairstyles in the presence of gravity and collisions. The method is based upon a mechanically accurate model for static elastic rods (Kirchhoff model), which accounts for the natural curliness of hair, as well as for hair ellipticity. The equilibrium shape is computed in a stable and easy way by energy minimization. This yields various typical hair configurations that can be observed in the real world, such as ringlets. As our results show, the method can generate different hair types with a very few input parameters, and perform virtual hairdressing operations such as wetting, cutting and drying hair.",
"title": ""
},
{
"docid": "3c29a0579a2f7d4f010b9b2f2df16e2c",
"text": "In recent years research on human activity recognition using wearable sensors has enabled to achieve impressive results on real-world data. However, the most successful activity recognition algorithms require substantial amounts of labeled training data. The generation of this data is not only tedious and error prone but also limits the applicability and scalability of today's approaches. This paper explores and systematically analyzes two different techniques to significantly reduce the required amount of labeled training data. The first technique is based on semi-supervised learning and uses self-training and co-training. The second technique is inspired by active learning. In this approach the system actively asks which data the user should label. With both techniques, the required amount of training data can be reduced significantly while obtaining similar and sometimes even better performance than standard supervised techniques. The experiments are conducted using one of the largest and richest currently available datasets.",
"title": ""
},
{
"docid": "7916a261319dad5f257a0b8e0fa97fec",
"text": "INTRODUCTION\nPreliminary research has indicated that recreational ketamine use may be associated with marked cognitive impairments and elevated psychopathological symptoms, although no study to date has determined how these are affected by differing frequencies of use or whether they are reversible on cessation of use. In this study we aimed to determine how variations in ketamine use and abstention from prior use affect neurocognitive function and psychological wellbeing.\n\n\nMETHOD\nWe assessed a total of 150 individuals: 30 frequent ketamine users, 30 infrequent ketamine users, 30 ex-ketamine users, 30 polydrug users and 30 controls who did not use illicit drugs. Cognitive tasks included spatial working memory, pattern recognition memory, the Stockings of Cambridge (a variant of the Tower of London task), simple vigilance and verbal and category fluency. Standardized questionnaires were used to assess psychological wellbeing. Hair analysis was used to verify group membership.\n\n\nRESULTS\nFrequent ketamine users were impaired on spatial working memory, pattern recognition memory, Stockings of Cambridge and category fluency but exhibited preserved verbal fluency and prose recall. There were no differences in the performance of the infrequent ketamine users or ex-users compared to the other groups. Frequent users showed increased delusional, dissociative and schizotypal symptoms which were also evident to a lesser extent in infrequent and ex-users. Delusional symptoms correlated positively with the amount of ketamine used currently by the frequent users.\n\n\nCONCLUSIONS\nFrequent ketamine use is associated with impairments in working memory, episodic memory and aspects of executive function as well as reduced psychological wellbeing. 'Recreational' ketamine use does not appear to be associated with distinct cognitive impairments although increased levels of delusional and dissociative symptoms were observed. As no performance decrements were observed in the ex-ketamine users, it is possible that the cognitive impairments observed in the frequent ketamine group are reversible upon cessation of ketamine use, although delusional symptoms persist.",
"title": ""
},
{
"docid": "fe55db2d04fdba4f4655e39520f135bd",
"text": "The application of virtual reality in e-commerce has enormous potential for transforming online shopping into a real-world equivalent. However, the growing research interest focuses on virtual reality technology adoption for the development of e-commerce environments without addressing social and behavioral facets of online shopping such as trust. At the same time, trust is a critical success factor for e-commerce and remains an open issue as to how it can be accomplished within an online store. This paper shows that the use of virtual reality for online shopping environments offers an advanced customer experience compared to conventional web stores and enables the formation of customer trust. The paper presents a prototype virtual shopping mall environment, designed on principles derived by an empirically tested model for building trust in e-commerce. The environment is evaluated with an empirical study providing evidence and explaining that a virtual reality shopping environment would be preferred by customers over a conventional web store and would facilitate the assessment of the e-vendor’s trustworthiness.",
"title": ""
},
{
"docid": "be0033b0f251970f8a8876b28cd2042e",
"text": "A power transformer will yield a frequency response which is unique to its mechanical geometry and electrical properties. Changes in the frequency response of a transformer can be potential indicators of winding deformation as well as other structural and electrical problems. A diagnostic tool which leverages this knowledge in order to detect such changes is frequency-response analysis (FRA). To date, FRA has been used to identify changes in a transformer's frequency response but with limited insight into the underlying cause of the change. However, there is now a growing research interest in specifically identifying the structural change in a transformer directly from its FRA signature. The aim of this paper is to support FRA interpretation through the development of wideband three-phase transformer models which are based on three types of FRA tests. The resulting models can be used as a flexible test bed for parameter sensitivity analysis, leading to greater insight into the effects that geometric change can have on transformer FRA. This paper will demonstrate the applicability of this modeling approach by simultaneously fitting each model to the corresponding FRA data sets without a priori knowledge of the transformer's internal dimensions, and then quantitatively assessing the accuracy of key model parameters.",
"title": ""
},
{
"docid": "14d3712efca71981103ba3ab44c39dd2",
"text": "This paper is survey of computational approaches for paraphrasing. Paraphrasing methods such as generation, identification and acquisition of phrases or sentences is a process that conveys same information. Paraphrasing is a process of expressing semantic content of source using different words to achieve the greater clarity. The task of generating or identifying the semantic equivalence for different elements of language such as words sentences; is an essential part of the natural language processing. Paraphrasing is being used for various natural language applications. This paper discuses paraphrase impact on few applications and also various paraphrasing methods.",
"title": ""
},
{
"docid": "b75f0349fdd9e5d2fa2441a22c2bf2e3",
"text": "Virtual Reality (VR) is a three-dimensional computer-generated virtual world. It is essential to introduced VR technology to education area to develop new teaching mode to improve the efficiency and quality of teaching and learning. Among them, VR classroom has quickly become most dazzling star with its subversive advantage. This paper proposes an overall integration solution of VR classroom, including its composition, its scene design of various disciplines and its main advantage. Finally, a case study of a geography lesson is provided to show its advantages and strong potentiality.",
"title": ""
},
{
"docid": "fbc148e6c44e7315d55f2f5b9a2a2190",
"text": "India contributes about 70% of malaria in the South East Asian Region of WHO. Although annually India reports about two million cases and 1000 deaths attributable to malaria, there is an increasing trend in the proportion of Plasmodium falciparum as the agent. There exists heterogeneity and variability in the risk of malaria transmission between and within the states of the country as many ecotypes/paradigms of malaria have been recognized. The pattern of clinical presentation of severe malaria has also changed and while multi-organ failure is more frequently observed in falciparum malaria, there are reports of vivax malaria presenting with severe manifestations. The high burden populations are ethnic tribes living in the forested pockets of the states like Orissa, Jharkhand, Madhya Pradesh, Chhattisgarh and the North Eastern states which contribute bulk of morbidity and mortality due to malaria in the country. Drug resistance, insecticide resistance, lack of knowledge of actual disease burden along with new paradigms of malaria pose a challenge for malaria control in the country. Considering the existing gaps in reported and estimated morbidity and mortality, need for estimation of true burden of malaria has been stressed. Administrative, financial, technical and operational challenges faced by the national programme have been elucidated. Approaches and priorities that may be helpful in tackling serious issues confronting malaria programme have been outlined.",
"title": ""
},
{
"docid": "4bb9186954536103422ef662dc7459bf",
"text": "Cantilevered beams with piezoceramic layers have been frequently used as piezoelectric vibration energy harvesters in the past five years. The literature includes several single degree-of-freedom models, a few approximate distributed parameter models and even some incorrect approaches for predicting the electromechanical behavior of these harvesters. In this paper, we present the exact analytical solution of a cantilevered piezoelectric energy harvester with Euler–Bernoulli beam assumptions. The excitation of the harvester is assumed to be due to its base motion in the form of translation in the transverse direction with small rotation, and it is not restricted to be harmonic in time. The resulting expressions for the coupled mechanical response and the electrical outputs are then reduced for the particular case of harmonic behavior in time and closed-form exact expressions are obtained. Simple expressions for the coupled mechanical response, voltage, current, and power outputs are also presented for excitations around the modal frequencies. Finally, the model proposed is used in a parametric case study for a unimorph harvester, and important characteristics of the coupled distributed parameter system, such as short circuit and open circuit behaviors, are investigated in detail. Modal electromechanical coupling and dependence of the electrical outputs on the locations of the electrodes are also discussed with examples. DOI: 10.1115/1.2890402",
"title": ""
},
{
"docid": "c2fc709aeb4c48a3bd2071b4693d4296",
"text": "Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.",
"title": ""
},
{
"docid": "334cbe19e968cb424719c7efbee9fd20",
"text": "We examine the relationship between scholarly practice and participatory technologies and explore how such technologies invite and reflect the emergence of a new form of scholarship that we call Networked Participatory Scholarship: scholars’ participation in online social networks to share, reflect upon, critique, improve, validate, and otherwise develop their scholarship. We discuss emergent techno-cultural pressures that may influence higher education scholars to reconsider some of the foundational principles upon which scholarship has been established due to the limitations of a pre-digital world, and delineate how scholarship itself is changing with the emergence of certain tools, social behaviors, and cultural expectations associated with participatory technologies. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8a00e80a705aa1ddd690c6b77844fdf4",
"text": "Network interface implementors have repeatedl y attempted to offload TCP processingfrom the host CPU. Theseefforts met with li ttle success,becausetheywere basedon faulty premises.TCPoffloadper se is neither of much overall benefitnor free from significant costs and risks. But TCPoffloadin theserviceof very specific goalsmight actually beuseful. In thecontext of the replacement of storage-spec ific interconnect via commoditized network hardware, TCP offload (and more generally, offloading the transport protocol) appropriately solvesanimportant problem.",
"title": ""
},
{
"docid": "6b85a2e17f4fe6073527ddf2d1f4d4c1",
"text": "EU policies call for the strengthening of Europe's innovative capacity and the development of a creative and knowledge-intensive economy and society through reinforcing the role of education and training in the knowledge triangle and focusing school curricula on creativity, innovation and entrepreneurship. This report brings evidence to the debate on the status, barriers and enablers for creativity and innovation in compulsory schooling in Europe. It is the final report of the project: ‘Creativity and Innovation in Education and Training in the EU27 (ICEAC)’ carried out by IPTS in collaboration with DG Education and Culture, highlighting the main messages gathered from each phase of the study: a literature review, a survey with teachers, an analysis of curricula and of good practices, stakeholder and expert interviews, and experts workshops. Based on this empirical material, five major areas for improvement are proposed to enable more creative learning and innovative teaching in Europe: curricula, pedagogies and assessment, teacher training, ICT and digital media, and educational culture and leadership. The study highlights the need for action at both national and European level to bring about the changes required for an open and innovative European educational culture based on the creative and innovative potential of its future generation. How to obtain EU publications Our priced publications are available from EU Bookshop (http://bookshop.europa.eu), where you can place an order with the sales agent of your choice. The Publications Office has a worldwide network of sales agents. You can obtain their contact details by sending a fax to (352) 29 29-42758. The mission of the Joint Research Centre is to provide customer-driven scientific and technical support for the conception, development, implementation and monitoring of European Union policies. As a service of the European Commission, the Joint Research Centre functions as a reference centre of science and technology for the Union. Close to the policy-making process, it serves the common interest of the Member States, while being independent of special interests, whether private or national. LF-N A -275-EN -C",
"title": ""
},
{
"docid": "5c3e29cb84663a3814d05934864eaf3f",
"text": "BACKGROUND\nAlthough physical activity has substantial health benefits and reduces mortality, few studies have examined its impact on survival beyond age 75.\n\n\nMETHODS\nUsing the population-based Leisure World Cohort Study, we explored the association of activity on all-cause mortality in older adults (median age at baseline = 74 years). We followed 8,371 women and 4,828 men for 28 years or until death (median = 13 years) and calculated relative risks for various measures of activity at baseline using Cox regression analysis for four age groups (<70, 70-74, 75-79, and 80+ years) in men and women separately.\n\n\nRESULTS\nTime spent in active activities, even ½ hour/day, resulted in significantly lower (15-35%) mortality risks compared with no time in active activities. This reduction was evident in all sex-age groups except the youngest men. Participants who reported spending 6 or more hours/day in other less physically demanding activities also had significantly reduced risks of death of 15-30%. The beneficial effect of activities was observed in both those who did and those who did not cut down their activities due to illness or injury. Neither adjustment for potential confounders, exclusion of the first 5 years of follow-up, nor exclusion of individuals with histories of chronic disease substantially changed the findings.\n\n\nCONCLUSIONS\nParticipation in leisure-time activities is an important health promoter in aging populations. The association of less physically demanding activities as well as traditional physical activities involving moderate exertion with reduced mortality suggests that the protective effect of engagement in activities is a robust one.",
"title": ""
}
] |
scidocsrr
|
a296a163da8d3dbdc9d30adf3c153efc
|
Reinventing the Wheel: Transforming Steering Wheel Systems for Autonomous Vehicles
|
[
{
"docid": "26429dfbcf0562376b3308882d5efbea",
"text": "This review discusses the methodology of the standardized on-the-road driving test and standard operation procedures to conduct the test and analyze the data. The on-the-road driving test has proven to be a sensitive and reliable method to examine driving ability after administration of central nervous system (CNS) drugs. The test is performed on a public highway in normal traffic. Subjects are instructed to drive with a steady lateral position and constant speed. Its primary parameter, the standard deviation of lateral position (SDLP), ie, an index of 'weaving', is a stable measure of driving performance with high test-retest reliability. SDLP differences from placebo are dose-dependent, and do not depend on the subject's baseline driving skills (placebo SDLP). It is important that standard operation procedures are applied to conduct the test and analyze the data in order to allow comparisons between studies from different sites.",
"title": ""
}
] |
[
{
"docid": "c4bf76c3bec23a990cfd9dda1f4ca350",
"text": "We propose a novel probabilistic framework for the extraction of density-based 3D shape descriptors using kernel density estimation. Our descriptors are derived from the probability density functions (pdf) of local surface features characterizing the 3D object geometry. Assuming that the shape of the 3D object is represented as a mesh consisting of triangles with arbitrary size and shape, we provide efficient means to approximate the moments of geometric features on a triangle basis. Our framework produces a number of 3D shape descriptors that prove to be quite discriminative in retrieval applications. We test our descriptors and compare them with several other histogram-based methods on two 3D model databases, Princeton Shape Benchmark and Sculpteur, which are fundamentally different in semantic content and mesh quality. Experimental results show that our methodology not only improves the performance of existing descriptors, but also provides a rigorous framework to advance and to test new ones.",
"title": ""
},
{
"docid": "34eb9d42e96ffc040b9c1e6c560b914c",
"text": "Deploy fuzzy logic engineering tools in the 0nance arena, speci0cally in the technical analysis 0eld. Since technical analysis theory consists of indicators used by experts to evaluate stock prices, the new proposed method maps these indicators into new inputs that can be fed into a fuzzy logic system. The only required inputs to these indicators are past sequence of stock prices. This method relies on fuzzy logic to formulate a decision making when certain price movements or certain price formations occur. The success of the system is measured by comparing system output versus stock price movement. The new stock evaluation method is proven to exceed market performance and it can be an excellent tool in the technical analysis 0eld. The 5exibility of the system is also demonstrated. c © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "547ce0778d8d51d96a610fb72b6bb4e9",
"text": "Applications in cyber-physical systems are increasingly coupled with online instruments to perform long-running, continuous data processing. Such “always on” dataflow applications are dynamic, where they need to change the applications logic and performance at runtime, in response to external operational needs. F`oε is a continuous dataflow framework that is designed to be adaptive for dynamic applications on Cloud infrastructure. It offers advanced dataflow patterns like BSP and MapReduce for flexible and holistic composition of streams and files, and supports dynamic recomposition at runtime with minimal impact on the execution. Adaptive resource allocation strategies allow our framework to effectively use elastic Cloud resources to meet varying data rates. We illustrate the design patterns of F`oε by running an integration pipeline and a tweet clustering application from the Smart Power Grids domain on a private Eucalyptus Cloud. The responsiveness of our resource adaptation is validated through simulations for periodic, bursty and random workloads.",
"title": ""
},
{
"docid": "fea8bf3ca00b3440c2b34188876917a2",
"text": "Digitalization has been identified as one of the major trends changing society and business. Digitalization causes changes for companies due to the adoption of digital technologies in the organization or in the operation environment. This paper discusses digitalization from the viewpoint of diverse case studies carried out to collect data from several companies, and a literature study to complement the data. This paper describes the first version of the digital transformation model, derived from synthesis of these industrial cases, explaining a starting point for a systematic approach to tackle digital transformation. The model is aimed to help companies systematically handle the changes associated with digitalization. The model consists of four main steps, starting with positioning the company in digitalization and defining goals for the company, and then analyzing the company’s current state with respect to digitalization goals. Next, a roadmap for reaching the goals is defined and implemented in the company. These steps are iterative and can be repeated several times. Although company situations vary, these steps will help to systematically approach digitalization and to take the steps necessary to benefit from it.",
"title": ""
},
{
"docid": "100c62f22feea14ac54c21408432c371",
"text": "Modern approach to the FOREX currency exchange market requires support from the computer algorithms to manage huge volumes of the transactions and to find opportunities in a vast number of currency pairs traded daily. There are many well known techniques used by market participants on both FOREX and stock-exchange markets (i.e. Fundamental and technical analysis) but nowadays AI based techniques seem to play key role in the automated transaction and decision supporting systems. This paper presents the comprehensive analysis over Feed Forward Multilayer Perceptron (ANN) parameters and their impact to accurately forecast FOREX trend of the selected currency pair. The goal of this paper is to provide information on how to construct an ANN with particular respect to its parameters and training method to obtain the best possible forecasting capabilities. The ANN parameters investigated in this paper include: number of hidden layers, number of neurons in hidden layers, use of constant/bias neurons, activation functions, but also reviews the impact of the training methods in the process of the creating reliable and valuable ANN, useful to predict the market trends. The experimental part has been performed on the historical data of the EUR/USD pair.",
"title": ""
},
{
"docid": "2ad8ce7f0463f508838e38e18c50bccd",
"text": "Contrastive summarization is the problem of jointly generating summaries for two entities in order to highlight their differences. In this paper we present an investigation into contrastive summarization through an implementation and evaluation of a contrastive opinion summarizer in the consumer reviews domain.",
"title": ""
},
{
"docid": "8031bea8ee2115e1ec32583d4234e92a",
"text": "In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the Free917 and WebQuestions benchmark datasets show our semantic parser improves over the state of the art.",
"title": ""
},
{
"docid": "5a232c84b76758acd1a44d42aaa3c064",
"text": "The OpenStreetMap (OSM) project, founded in 2004, has gathered an exceptional amount of interest in recent years and counts as one of the most impressive sources of Volunteered Geographic Information (VGI) on the Internet. In total, more than half a million members had registered for the project by the end of 2011. However, while this number of contributors seems impressive, questions remain about the individual contributions that have been made by the project members. This research article contains several studies regarding the contributions by the community of the project. The results show that only 38% (192,000) of the registered members carried out at least one edit in the OSM database and that only 5% (24,000) of all members actively contributed to the project in a more productive way. The majority of the members are located in Europe (72%) and each member has an activity area whose size may range from one soccer field up to more than 50 km 2 . In addition to several more analyses conducted for this article, predictions will be made about how this newly acquired knowledge can be used for future research.",
"title": ""
},
{
"docid": "ee21b2744b26a11647c72d09025a6e11",
"text": "This paper presents the design of microstrip-ridge gap waveguide using via-holes in printed circuit boards, a solution for high-frequency circuits. The study includes how to define the numerical ports, pin sensitivity, losses, and also a comparison with performance of normal microstrip lines and inverted microstrip lines. The results are produced using commercially available electromagnetic simulators. A WR-15 to microstrip-ridge gap waveguide transition was also designed. The results are verified with measurements on microstrip-ridge gap waveguides with WR15 transitions at both ends.",
"title": ""
},
{
"docid": "ff429302ec983dd1203ac6dd97506ef8",
"text": "Financial crises have occurred for many centuries. They are often preceded by a credit boom and a rise in real estate and other asset prices, as in the current crisis. They are also often associated with severe disruption in the real economy. This paper surveys the theoretical and empirical literature on crises. The first explanation of banking crises is that they are a panic. The second is that they are part of the business cycle. Modeling crises as a global game allows the two to be unified. With all the liquidity problems in interbank markets that have occurred during the current crisis, there is a growing literature on this topic. Perhaps the most serious market failure associated with crises is contagion, and there are many papers on this important topic. The relationship between asset price bubbles, particularly in real estate, and crises is discussed at length. Disciplines Economic Theory | Finance | Finance and Financial Management This journal article is available at ScholarlyCommons: http://repository.upenn.edu/fnce_papers/403 Financial Crises: Theory and Evidence Franklin Allen University of Pennsylvania Ana Babus Cambridge University Elena Carletti European University Institute",
"title": ""
},
{
"docid": "0bc1c637d6f4334dd8a27491ebde40d6",
"text": "Osteoarthritis of the hip describes a clinical syndrome of joint pain accompanied by varying degrees of functional limitation and reduced quality of life. Osteoarthritis may not be progressive and most patients will not need surgery, with their symptoms adequately controlled by non-surgical measures. The treatment of hip osteoarthritis is aimed at reducing pain and stiffness and improving joint mobility. Total hip replacement remains the most effective treatment option but it is a major surgery with potential serious complications. NICE guideline has suggested a holistic approach to management of hip osteoarthritis which includes both nonpharmacological and pharmacological treatments. The non-pharmacological treatments range from education ,physical therapy and behavioral changes ,walking aids .The ESCAPE( Enabling Self-Management and Coping of Arthritic Pain Through Exercise) rehabilitation programme for hip and knee osteoarthritis which integrates simple education, self-management and coping strategies, with an exercise regimen has shown to be more cost-effective than usual care. There is a choice of reviewed pharmacological treatments available, but there are few current reviews of possible nonpharmacological methods. This review will focus on the non-pharmacological and non-surgical methods.",
"title": ""
},
{
"docid": "4552e4542db450e98f4aee2e5a019f0f",
"text": "Time-series data is increasingly collected in many domains. One example is the smart electricity infrastructure, which generates huge volumes of such data from sources such as smart electricity meters. Although today these data are used for visualization and billing in mostly 15-min resolution, its original temporal resolution frequently is more fine-grained, e.g., seconds. This is useful for various analytical applications such as short-term forecasting, disaggregation and visualization. However, transmitting and storing huge amounts of such fine-grained data are prohibitively expensive in terms of storage space in many cases. In this article, we present a compression technique based on piecewise regression and two methods which describe the performance of the compression. Although our technique is a general approach for time-series compression, smart grids serve as our running example and as our evaluation scenario. Depending on the data and the use-case scenario, the technique compresses data by ratios of up to factor 5,000 while maintaining its usefulness for analytics. The proposed technique has outperformed related work and has been applied to three real-world energy datasets in different scenarios. Finally, we show that the proposed compression technique can be implemented in a state-of-the-art database management system.",
"title": ""
},
{
"docid": "1a44e040bbb5c81a53a1255fc7f5d4d7",
"text": "Information technology and the Internet have had a dramatic effect on business operations. Companies are making large investments in e-commerce applications but are hard pressed to evaluate the success of their e-commerce systems. The DeLone & McLean Information Systems Success Model can be adapted to the measurement challenges of the new e-commerce world. The six dimensions of the updated model are a parsimonious framework for organizing the e-commerce success metrics identified in the literature. Two case examples demonstrate how the model can be used to guide the identification and specification of e-commerce success metrics.",
"title": ""
},
{
"docid": "c638a99b471a97d690c1867408d0af7b",
"text": "The well known SIR models have been around for many years. Under some suitable assumptions, the models provide information about when does the epidemic occur and when it doesn’t. The models can incorporate the birth, death, and immunization and analyze the outcome mathematically. In this project we studied several SIR models including birth, death and immunization. We also studied the bifurcation analysis associated with the disease free and epidemic equilibrium.",
"title": ""
},
{
"docid": "7f68cbaa4fdc043cddd9fe625657610e",
"text": "While clustering is one of the most popular methods for data mining, analysts lack adequate tools for quick, iterative clustering analysis, which is essential for hypothesis generation and data reasoning. We introduce Clustrophile, an interactive tool for iteratively computing discrete and continuous data clusters, rapidly exploring different choices of clustering parameters, and reasoning about clustering instances in relation to data dimensions. Clustrophile combines three basic visualizations – a table of raw datasets, a scatter plot of planar projections, and a matrix diagram (heatmap) of discrete clusterings – through interaction and intermediate visual encoding. Clustrophile also contributes two spatial interaction techniques, forward projection and backward projection, and a visualization method, prolines, for reasoning about two-dimensional projections obtained through dimensionality reductions.",
"title": ""
},
{
"docid": "20705a14783c89ac38693b2202363c1f",
"text": "This paper analyzes the effect of employee recognition, pay, and benefits on job satisfaction. In this cross-sectional study, survey responses from university students in the U.S. (n = 457), Malaysia (n = 347) and Vietnam (n = 391) were analyzed. Employee recognition, pay, and benefits were found to have a significant impact on job satisfaction, regardless of home country income level (high, middle or low income) and culture (collectivist or individualist). However, the effect of benefits on job satisfaction was significantly more important for U.S. respondents than for respondents from Malaysia and Vietnam. The authors conclude that both financial and nonfinancial rewards have a role in influencing job satisfaction, which ultimately impacts employee performance. Theoretical and practical implications for developing effective recruitment and retention policies for employees are also discussed.",
"title": ""
},
{
"docid": "ac5f518cbd783060af1cf6700b994469",
"text": "Scalable evolutionary computation has. become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight in the scalability problems of simple genetic algorithms. Particularly, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithmnamely elitism, niching, and restricted mating are not significantly improving the scalability problems.",
"title": ""
},
{
"docid": "f5ac213265b9ac8674af92fb2541cebd",
"text": "BACKGROUND\nCorneal oedema is a common post-operative problem that delays or prevents visual recovery from ocular surgery. Honey is a supersaturated solution of sugars with an acidic pH, high osmolarity and low water content. These characteristics inhibit the growth of micro-organisms, reduce oedema and promote epithelialisation. This clinical case series describes the use of a regulatory approved Leptospermum species honey ophthalmic product, in the management of post-operative corneal oedema and bullous keratopathy.\n\n\nMETHODS\nA retrospective review of 18 consecutive cases (30 eyes) with corneal oedema persisting beyond one month after single or multiple ocular surgical procedures (phacoemulsification cataract surgery and additional procedures) treated with Optimel Antibacterial Manuka Eye Drops twice to three times daily as an adjunctive therapy to conventional topical management with corticosteroid, aqueous suppressants, hypertonic sodium chloride five per cent, eyelid hygiene and artificial tears. Visual acuity and central corneal thickness were measured before and at the conclusion of Optimel treatment.\n\n\nRESULTS\nA temporary reduction in corneal epithelial oedema lasting up to several hours was observed after the initial Optimel instillation and was associated with a reduction in central corneal thickness, resolution of epithelial microcysts, collapse of epithelial bullae, improved corneal clarity, improved visualisation of the intraocular structures and improved visual acuity. Additionally, with chronic use, reduction in punctate epitheliopathy, reduction in central corneal thickness and improvement in visual acuity were achieved. Temporary stinging after Optimel instillation was experienced. No adverse infectious or inflammatory events occurred during treatment with Optimel.\n\n\nCONCLUSIONS\nOptimel was a safe and effective adjunctive therapeutic strategy in the management of persistent post-operative corneal oedema and warrants further investigation in clinical trials.",
"title": ""
},
{
"docid": "9c44b6e7b91ecfeab5bba95a25d59401",
"text": "Many recent papers address reading comprehension, where examples consist of (question, passage, answer) tuples. Presumably, a model must combine information from both questions and passages to predict corresponding answers. However, despite intense interest in the topic, with hundreds of published papers vying for leaderboard dominance, basic questions about the difficulty of many popular benchmarks remain unanswered. In this paper, we establish sensible baselines for the bAbI, SQuAD, CBT, CNN, and Whodid-What datasets, finding that questionand passage-only models often perform surprisingly well. On 14 out of 20 bAbI tasks, passage-only models achieve greater than 50% accuracy, sometimes matching the full model. Interestingly, while CBT provides 20-sentence passages, only the last is needed for comparably accurate prediction. By comparison, SQuAD and CNN appear better-constructed.",
"title": ""
},
{
"docid": "4aebb6566c8b27c7528cc108bacc2a60",
"text": "OBJECT\nSuperior cluneal nerve (SCN) entrapment neuropathy is a poorly understood clinical entity that can produce low-back pain. The authors report a less-invasive surgical treatment for SCN entrapment neuropathy that can be performed with local anesthesia.\n\n\nMETHODS\nFrom November 2010 through November 2011, the authors performed surgery in 34 patients (age range 18-83 years; mean 64 years) with SCN entrapment neuropathy. The entrapment was unilateral in 13 patients and bilateral in 21. The mean postoperative follow-up period was 10 months (range 6-18 months). After the site was blocked with local anesthesia, the thoracolumbar fascia of the orifice was dissected with microscissors in a distal-to-rostral direction along the SCN to release the entrapped nerve.\n\n\nRESULTS\nwere evaluated according to Japanese Orthopaedic Association (JOA) and Roland-Morris Disability Questionnaire (RMDQ) scores. Results In all 34 patients, the SCN penetrated the orifice of the thoracolumbar fascia and could be released by dissection of the fascia. There were no intraoperative surgery-related complications. For all patients, surgery was effective; JOA and RMDQ scores indicated significant improvement (p < 0.05).\n\n\nCONCLUSIONS\nFor patients with low-back pain, SCN entrapment neuropathy must be considered as a causative factor. Treatment by less-invasive surgery, with local anesthesia, yielded excellent clinical outcomes.",
"title": ""
}
] |
scidocsrr
|
98286f3330d5ad82112f44c22764f2eb
|
Understanding Convolutional Neural Networks with A Mathematical Model
|
[
{
"docid": "8fd893ef59f788742de78d8a279496ca",
"text": "A wavelet scattering network computes a translation invariant image representation, which is stable to deformations and preserves high frequency information for classification. It cascades wavelet transform convolutions with non-linear modulus and averaging operators. The first network layer outputs SIFT-type descriptors whereas the next layers provide complementary invariant information which improves classification. The mathematical analysis of wavelet scattering networks explain important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having same Fourier power spectrum. State of the art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.",
"title": ""
}
] |
[
{
"docid": "bc262b5366f1bf14e5120f68df8f5254",
"text": "BACKGROUND\nThe aim of this study was to compare the results of laparoscopy-assisted total gastrectomy with those of open total gastrectomy for early gastric cancer.\n\n\nMETHODS\nPatients with gastric cancer who underwent total gastrectomy with curative intent in three Korean tertiary hospitals between January 2003 and December 2010 were included in this multicentre, retrospective, propensity score-matched cohort study. Cox proportional hazards regression models were used to evaluate the association between operation method and survival.\n\n\nRESULTS\nA total of 753 patients with early gastric cancer were included in the study. There were no significant differences in the matched cohort for overall survival (hazard ratio (HR) for laparoscopy-assisted versus open total gastrectomy 0.96, 95 per cent c.i. 0.57 to 1.65) or recurrence-free survival (HR 2.20, 0.51 to 9.52). The patterns of recurrence were no different between the two groups. The severity of complications, according to the Clavien-Dindo classification, was similar in both groups. The most common complications were anastomosis-related in the laparoscopy-assisted group (8.0 per cent versus 4.2 per cent in the open group; P = 0.015) and wound-related in the open group (1.6 versus 5.6 per cent respectively; P = 0.003). Postoperative death was more common in the laparoscopy-assisted group (1.6 versus 0.2 per cent; P = 0.045).\n\n\nCONCLUSION\nLaparoscopy-assisted total gastrectomy for early gastric cancer is feasible in terms of long-term results, including survival and recurrence. However, a higher postoperative mortality rate and an increased risk of anastomotic leakage after laparoscopic-assisted total gastrectomy are of concern.",
"title": ""
},
{
"docid": "1afc103a3878d859ec15929433f49077",
"text": "Large-scale deep neural networks (DNNs) are both compute and memory intensive. As the size of DNNs continues to grow, it is critical to improve the energy efficiency and performance while maintaining accuracy. For DNNs, the model size is an important factor affecting performance, scalability and energy efficiency. Weight pruning achieves good compression ratios but suffers from three drawbacks: 1) the irregular network structure after pruning, which affects performance and throughput; 2) the increased training complexity; and 3) the lack of rigirous guarantee of compression ratio and inference accuracy.\n To overcome these limitations, this paper proposes CirCNN, a principled approach to represent weights and process neural networks using block-circulant matrices. CirCNN utilizes the Fast Fourier Transform (FFT)-based fast multiplication, simultaneously reducing the computational complexity (both in inference and training) from O(n2) to O(n log n) and the storage complexity from O(n2) to O(n), with negligible accuracy loss. Compared to other approaches, CirCNN is distinct due to its mathematical rigor: the DNNs based on CirCNN can converge to the same \"effectiveness\" as DNNs without compression. We propose the CirCNN architecture, a universal DNN inference engine that can be implemented in various hardware/software platforms with configurable network architecture (e.g., layer type, size, scales, etc.). In CirCNN architecture: 1) Due to the recursive property, FFT can be used as the key computing kernel, which ensures universal and small-footprint implementations. 2) The compressed but regular network structure avoids the pitfalls of the network pruning and facilitates high performance and throughput with highly pipelined and parallel design. To demonstrate the performance and energy efficiency, we test CirCNN in FPGA, ASIC and embedded processors. Our results show that CirCNN architecture achieves very high energy efficiency and performance with a small hardware footprint. Based on the FPGA implementation and ASIC synthesis results, CirCNN achieves 6 - 102X energy efficiency improvements compared with the best state-of-the-art results.",
"title": ""
},
{
"docid": "321309a290260c353de1c1e8a84ccb22",
"text": "The eld of Qualitative Spatial Reasoning is now an active research area in its own right within AI (and also in Geographical Information Systems) having grown out of earlier work in philosophical logic and more general Qualitative Reasoning in AI. In this paper (which is an updated version of 25]) I will survey the state of the art in Qualitative Spatial Reasoning, covering representation and reasoning issues as well as pointing to some application areas. 1 What is Qualitative Reasoning? The principal goal of Qualitative Reasoning (QR) 129] is to represent not only our everyday commonsense knowledge about the physical world, but also the underlying abstractions used by engineers and scientists when they create quantitative models. Endowed with such knowledge, and appropriate reasoning methods , a computer could make predictions, diagnoses and explain the behaviour of physical systems in a qualitative manner, even when a precise quantitative description is not available 1 or is computationally intractable. The key to a qualitative representation is not simply that it is symbolic, and utilises discrete quantity spaces, but that the distinctions made in these discretisations are relevant to the behaviour being modelled { i.e. distinctions are only introduced if they are necessary to model some particular aspect of the domain with respect to the task in hand. Even very simple quantity spaces can be very useful, e.g. the quantity space consisting just of f?; 0; +g, representing the two semi-open intervals of the real number line, and their dividing point, is widely used in the literature, e.g. 129]. Given such a quantity space, one then wants to be able to compute with it. There is normally a natural ordering (either partial or total) associated with a quantity space, and one form of simple but eeective inference 1 Note that although one use for qualitative reasoning is that it allows inferences to be made in the absence of complete knowledge, it does this not by probabilistic or fuzzy techniques (which may rely on arbitrarily assigned probabilities or membership values) but by refusing to diierentiate between quantities unless there is suucient evidence to do so; this is achieved essentially by collapsingìndistinguishable' values into an equivalence class which becomes a qualitative quantity. (The case where the indistinguishability relation is not an equivalence relation has not been much considered, except by 86, 83].)",
"title": ""
},
{
"docid": "282ace724b3c9a2e8b051499ba5e4bfe",
"text": "Fog computing, being an extension to cloud computing has addressed some issues found in cloud computing by providing additional features, such as location awareness, low latency, mobility support, and so on. Its unique features have also opened a way toward security challenges, which need to be focused for making it bug-free for the users. This paper is basically focusing on overcoming the security issues encountered during the data outsourcing from fog client to fog node. We have added Shibboleth also known as security and cross domain access control protocol between fog client and fog node for improved and secure communication between the fog client and fog node. Furthermore to prove whether Shibboleth meets the security requirement needed to provide the secure outsourcing. We have also formally verified the protocol against basic security properties using high level Petri net.",
"title": ""
},
{
"docid": "7baf37974303e6f83f52ff47c441387f",
"text": "We present a novel Bayesian model for semi-supervised part-of-speech tagging. Our model extends the Latent Dirichlet Allocation model and incorporates the intuition that words’ distributions over tags, p(t|w), are sparse. In addition we introduce a model for determining the set of possible tags of a word which captures important dependencies in the ambiguity classes of words. Our model outperforms the best previously proposed model for this task on a standard dataset.",
"title": ""
},
{
"docid": "7df3fe3ffffaac2fb6137fdc440eb9f4",
"text": "The amount of information in medical publications continues to increase at a tremendous rate. Systematic reviews help to process this growing body of information. They are fundamental tools for evidence-based medicine. In this paper, we show that automatic text classification can be useful in building systematic reviews for medical topics to speed up the reviewing process. We propose a per-question classification method that uses an ensemble of classifiers that exploit the particular protocol of a systematic review. We also show that when integrating the classifier in the human workflow of building a review the per-question method is superior to the global method. We test several evaluation measures on a real dataset.",
"title": ""
},
{
"docid": "bc018ef7cbcf7fc032fe8556016d08b1",
"text": "This paper presents a simple, efficient, yet robust approach, named joint-scale local binary pattern (JLBP), for texture classification. In the proposed approach, the joint-scale strategy is developed firstly, and the neighborhoods of different scales are fused together by a simple arithmetic operation. And then, the descriptor is extracted from the mutual integration of the local patches based on the conventional local binary pattern (LBP). The proposed scheme can not only describe the micro-textures of a local structure, but also the macro-textures of a larger area because of the joint of multiple scales. Further, motivated by the completed local binary pattern (CLBP) scheme, the completed JLBP (CJLBP) is presented to enhance its power. The proposed descriptor is evaluated in relation to other recent LBP-based patterns and non-LBP methods on popular benchmark texture databases, Outex, CURet and UIUC. Generally, the experimental results show that the new method performs better than the state-of-the-art techniques.",
"title": ""
},
{
"docid": "5516a1459b44b340c930e8a2ed3ca152",
"text": "Laboratory testing is important in the diagnosis and monitoring of liver injury and disease. Current liver tests include plasma markers of injury (e.g. aminotransferases, γ-glutamyl transferase, and alkaline phosphatase), markers of function (e.g. prothrombin time, bilirubin), viral hepatitis serologies, and markers of proliferation (e.g. α-fetoprotein). Among the injury markers, the alanine and aspartate aminotransferases (ALT and AST, respectively) are the most commonly used. However, interpretation of ALT and AST plasma levels can be complicated. Furthermore, both have poor prognostic utility in acute liver injury and liver failure. New biomarkers of liver injury are rapidly being developed, and the US Food and Drug Administration the European Medicines Agency have recently expressed support for use of some of these biomarkers in drug trials. The purpose of this paper is to review the history of liver biomarkers, to summarize mechanisms and interpretation of ALT and AST elevation in plasma in liver injury (particularly acute liver injury), and to discuss emerging liver injury biomarkers that may complement or even replace ALT and AST in the future.",
"title": ""
},
{
"docid": "8d92c2ec5c2372c7bb676ee7b8b0b511",
"text": "A 6-year-old boy was admitted to the emergency department (ED) suffering from petechiae and purpura on his face caused by a farming accident. He got his T-shirt caught in a rotating shaft at the back of a tractor. The T-shirt wrapped around his thorax and compressed him. He did not lose his consciousness during the incident. His score on the Glasgow Coma Scale was 15 and his initial vital signs were stable upon arrival at the ED. On physical examination, diffuse petechiae and purpura were noted on the face and neck although there was not any sign of the direct trauma (Figs. 1 and 2). The patient denied suffering head trauma. Examination for abdominal and thoracic organ injury was negative. Traumatic asphyxia is a rare condition presenting with cervicofacial cyanosis and edema, subconjunctival hemorrhage, and petechial hemorrhages of the face, neck, and upper chest that occurs due to a compressive force to the thoracoabdominal region [1]. Although the exact mechanism is controversial, it is probably due to thoracoabdominal compression causing increased intrathoracic pressure just at the moment of the event. The fear response, which is characterized by taking and holding a deep breath and closure of the glottis, also contributes to this process [1, 2]. This back pressure is transmitted ultimately to the head and neck veins and capillaries, with stasis and rupture producing characteristic petechial and subconjunctival hemorrhages [2]. The skin of the face, neck, and upper torso may appear blue-red to blue-black but it blanches over time. The discoloration and petechiae are often more prominent on the eyelids, nose, and lips [3]. In patients with traumatic asphyxia, injuries associated with other systems may also accompany the condition. Jongewaard et al. reported chest wall and intrathoracic injuries in 11 patients, loss of consciousness in 8, prolonged confusion in 5, seizures in 2, and visual disturbances in 2 of 14 patients with traumatic asphyxia [4]. Pulmonary contusion, hemothorax, pneumothorax, prolonged loss of consciousness, Int J Emerg Med (2009) 2:255–256 DOI 10.1007/s12245-009-0115-x",
"title": ""
},
{
"docid": "899e96eacd2c73730c157056c56eea25",
"text": "Hyaluronic acid (HA), a macropolysaccharidic component of the extracellular matrix, is common to most species and it is found in many sites of the human body, including skin and soft tissue. Not only does HA play a variety of roles in physiologic and in pathologic events, but it also has been extensively employed in cosmetic and skin-care products as drug delivery agent or for several biomedical applications. The most important limitations of HA are due to its short half-life and quick degradation in vivo and its consequently poor bioavailability. In the aim to overcome these difficulties, HA is generally subjected to several chemical changes. In this paper we obtained an acetylated form of HA with increased bioavailability with respect to the HA free form. Furthermore, an improved radical scavenging and anti-inflammatory activity has been evidenced, respectively, on ABTS radical cation and murine monocyte/macrophage cell lines (J774.A1).",
"title": ""
},
{
"docid": "a49c8e6f222b661447d1de32e29d0f16",
"text": "The discovery of ammonia oxidation by mesophilic and thermophilic Crenarchaeota and the widespread distribution of these organisms in marine and terrestrial environments indicated an important role for them in the global nitrogen cycle. However, very little is known about their physiology or their contribution to nitrification. Here we report oligotrophic ammonia oxidation kinetics and cellular characteristics of the mesophilic crenarchaeon ‘Candidatus Nitrosopumilus maritimus’ strain SCM1. Unlike characterized ammonia-oxidizing bacteria, SCM1 is adapted to life under extreme nutrient limitation, sustaining high specific oxidation rates at ammonium concentrations found in open oceans. Its half-saturation constant (Km = 133 nM total ammonium) and substrate threshold (≤10 nM) closely resemble kinetics of in situ nitrification in marine systems and directly link ammonia-oxidizing Archaea to oligotrophic nitrification. The remarkably high specific affinity for reduced nitrogen (68,700 l per g cells per h) of SCM1 suggests that Nitrosopumilus-like ammonia-oxidizing Archaea could successfully compete with heterotrophic bacterioplankton and phytoplankton. Together these findings support the hypothesis that nitrification is more prevalent in the marine nitrogen cycle than accounted for in current biogeochemical models.",
"title": ""
},
{
"docid": "1571716e227952689021a6d702074670",
"text": "Microorganisms form diverse multispecies communities in various ecosystems. The high abundance of fungal and bacterial species in these consortia results in specific communication between the microorganisms. A key role in this communication is played by secondary metabolites (SMs), which are also called natural products. Recently, it was shown that interspecies \"talk\" between microorganisms represents a physiological trigger to activate silent gene clusters leading to the formation of novel SMs by the involved species. This review focuses on mixed microbial cultivation, mainly between bacteria and fungi, with a special emphasis on the induced formation of fungal SMs in co-cultures. In addition, the role of chromatin remodeling in the induction is examined, and methodical perspectives for the analysis of natural products are presented. As an example for an intermicrobial interaction elucidated at the molecular level, we discuss the specific interaction between the filamentous fungi Aspergillus nidulans and Aspergillus fumigatus with the soil bacterium Streptomyces rapamycinicus, which provides an excellent model system to enlighten molecular concepts behind regulatory mechanisms and will pave the way to a novel avenue of drug discovery through targeted activation of silent SM gene clusters through co-cultivations of microorganisms.",
"title": ""
},
{
"docid": "1ad5568fd516295e1726a6f5c0c7ff29",
"text": "Although animal flight has a history of 300 million years, serious thought about human flight has a history of a few hundred years, dating from Leonardo da Vinci, 1 and successful human flight has only been achieved during the last 110 years. This is summarized in the attached figures 7.1-7.4. To some extent, this parallels the history of computing. Serious thought about computing dates back to Pascal and Leibnitz. While there was a notable attempt by Babbage to build a working computer in the 19 th century, successful electronic computers were finally achieved in the 40s, almost exactly contemporaneously with the development of the first successful jet aircraft. The early history of computers is summarized in figures 7.5-7.8. Tables 7.1 and 7.2 summarize the more recent progress in the development of supercomputers and microprocessors. Although airplane design had reached quite an advanced level by the 30s, exemplified by aircraft such as the DC-3 (Douglas Commercial-3) and the Spitfire (figure 7.2), the design of high speed aircraft requires an entirely new level of sophistication. This has led to a fusion of engineering, mathematics and computing, as indicated in figure 7.9.",
"title": ""
},
{
"docid": "2eff0a817a48a2fd62e6f834d0389105",
"text": "In this paper, we demonstrate that image reconstruction can be expressed in terms of neural networks. We show that filtered backprojection can be mapped identically onto a deep neural network architecture. As for the case of iterative reconstruction, the straight forward realization as matrix multiplication is not feasible. Thus, we propose to compute the back-projection layer efficiently as fixed function and its gradient as projection operation. This allows a data-driven approach for joint optimization of correction steps in projection domain and image domain. As a proof of concept, we demonstrate that we are able to learn weightings and additional filter layers that consistently reduce the reconstruction error of a limited angle reconstruction by a factor of two while keeping the same computational complexity as filtered back-projection. We believe that this kind of learning approach can be extended to any common CT artifact compensation heuristic and will outperform hand-crafted artifact correction methods in the future.",
"title": ""
},
{
"docid": "d46434bbbf73460bf422ebe4bd65b590",
"text": "We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks. Our resulting algorithm is competitive against state-of-the-art first-order optimisation methods, with sometimes significant improvement in optimisation performance. Unlike first-order methods, for which hyperparameter tuning of the optimisation parameters is often a laborious process, our approach can provide good performance even when used with default settings. A side result of our work is that for piecewise linear transfer functions, the network objective function can have no differentiable local maxima, which may partially explain why such transfer functions facilitate effective optimisation.",
"title": ""
},
{
"docid": "1682c1be8397a4d8e859e76cdc849740",
"text": "With the advent of RFLPs, genetic linkage maps are now being assembled for a number of organisms including both inbred experimental populations such as maize and outbred natural populations such as humans. Accurate construction of such genetic maps requires multipoint linkage analysis of particular types of pedigrees. We describe here a computer package, called MAPMAKER, designed specifically for this purpose. The program uses an efficient algorithm that allows simultaneous multipoint analysis of any number of loci. MAPMAKER also includes an interactive command language that makes it easy for a geneticist to explore linkage data. MAPMAKER has been applied to the construction of linkage maps in a number of organisms, including the human and several plants, and we outline the mapping strategies that have been used.",
"title": ""
},
{
"docid": "56b706edc6d1b6a2ff64770cb3f79c2e",
"text": "The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so successfully for chess and checkers. However, recent play in computer Go has been transformed by a new paradigm for tree search based on Monte-Carlo methods. Programs based on Monte-Carlo tree search now play at human-master levels and are beginning to challenge top professional players. In this paper, we describe the leading algorithms for Monte-Carlo tree search and explain how they have advanced the state of the art in computer Go.",
"title": ""
},
{
"docid": "f3e219c14f495762a2a6ced94708a477",
"text": "We present novel empirical observations regarding how stochastic gradient descent (SGD) navigates the loss landscape of over-parametrized deep neural networks (DNNs). These observations expose the qualitatively different roles of learning rate and batch-size in DNN optimization and generalization. Specifically we study the DNN loss surface along the trajectory of SGD by interpolating the loss surface between parameters from consecutive iterations and tracking various metrics during training. We find that the loss interpolation between parameters before and after each training iteration’s update is roughly convex with a minimum (valley floor) in between for most of the training. Based on this and other metrics, we deduce that for most of the training update steps, SGD moves in valley like regions of the loss surface by jumping from one valley wall to another at a height above the valley floor. This ’bouncing between walls at a height’ mechanism helps SGD traverse larger distance for small batch sizes and large learning rates which we find play qualitatively different roles in the dynamics. While a large learning rate maintains a large height from the valley floor, a small batch size injects noise facilitating exploration. We find this mechanism is crucial for generalization because the valley floor has barriers and this exploration above the valley floor allows SGD to quickly travel far away from the initialization point (without being affected by barriers) and find flatter regions, corresponding to better generalization.",
"title": ""
},
{
"docid": "53cd4fbcffbc4434d03ab393f2082219",
"text": "This study aims to demonstrate the interaction between the human being and the machine through a neural pattern recognizing interface, namely Emotiv EPOC, and a robotic device made by Arduino. The union of these technologies is assessed in specific tests, seeking a usable and stable binding with the smallest possible rate of error, based on a study of how the human electrical synapses are produced and captured by the electroencephalogram device, through examples of projects that achieved success using these technologies. In this study, the whole configuration of the software used to bind these technologies, as well as how they work, is explained, and the result of the experiments through an analysis of the tests performed is addressed. The difference in the results between genders and the influence of user feedback, as well as the accuracy of the technologies, are explained during the analysis of the data captured.",
"title": ""
},
{
"docid": "09be2c69afdd2f1cfd6f1d8c1583a0ac",
"text": "We present a real-time visual-based road following method for mobile robots in outdoor environments. The approach combines an image processing method, that allows to retrieve illumination invariant images, with an efficient path following algorithm. The method allows a mobile robot to autonomously navigate along pathways of different types in adverse lighting conditions using monocular vision. To validate the proposed method, we have evaluated its ability to correctly determine boundaries of pathways in a challenging outdoor dataset. Moreover, the method's performance was tested on a mobile robotic platform that autonomously navigated long paths in urban parks. The experiments demonstrated that the mobile robot was able to identify outdoor pathways of different types and navigate through them despite the presence of shadows that significantly influenced the paths' appearance.",
"title": ""
}
] |
scidocsrr
|
71bf2a5e8681e4a6a0c7efc339c7994f
|
A Novel Electricity Transaction Mode of Microgrids Based on Blockchain and Continuous Double Auction
|
[
{
"docid": "4ae82b3362756b0efed84596076ea6fb",
"text": "Smart grids equipped with bi-directional communication flow are expected to provide more sophisticated consumption monitoring and energy trading. However, the issues related to the security and privacy of consumption and trading data present serious challenges. In this paper we address the problem of providing transaction security in decentralized smart grid energy trading without reliance on trusted third parties. We have implemented a proof-of-concept for decentralized energy trading system using blockchain technology, multi-signatures, and anonymous encrypted messaging streams, enabling peers to anonymously negotiate energy prices and securely perform trading transactions. We conducted case studies to perform security analysis and performance evaluation within the context of the elicited security and privacy requirements.",
"title": ""
}
] |
[
{
"docid": "15f2f4ba8635366e5f2879d085511f46",
"text": "Vessel segmentation is a key step for various medical applications, it is widely used in monitoring the disease progression, and evaluation of various ophthalmologic diseases. However, manual vessel segmentation by trained specialists is a repetitive and time-consuming task. In the last two decades, many approaches have been introduced to segment the retinal vessels automatically. With the more recent advances in the field of neural networks and deep learning, multiple methods have been implemented with focus on the segmentation and delineation of the blood vessels. Deep Learning methods, such as the Convolutional Neural Networks (CNN), have recently become one of the new trends in the Computer Vision area. Their ability to find strong spatially local correlations in the data at different abstraction levels allows them to learn a set of filters that are useful to correctly segment the data, when given a labeled training set. In this dissertation, different approaches based on deep learning techniques for the segmentation of retinal blood vessels are studied. Furthermore, in this dissertation are also studied and evaluated the different techniques that have been used for vessel segmentation, based on machine learning (Random Forests and Support vector machine algorithms), and how these can be combined with the deep learning approaches.",
"title": ""
},
{
"docid": "4f7c309f9a495faa53f2bb11e5885aa4",
"text": "Three different RF chain architectures operating in the FSS (Fixed Satellite Services) + BSS (Broadcast Satellite) spectrum are presented and discussed. The RF chains are based on a common wideband corrugated feed horn, but differ on the approach used for bands and polarizations separation. A breadboard of a novel self-diplexed configuration has been designed, manufactured and tested. It proves to be the preferred candidate for bandwidth, losses and power handling. Very good correlation of the RF performance to the theoretical design is found.",
"title": ""
},
{
"docid": "0209132c7623c540c125a222552f33ac",
"text": "This paper reviews the criticism on the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections of using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm, is also a poor choice in the case of virtual marketing and identifies two main limitations of the framework in online environments: the drastically diminished role of the Ps and the lack of any strategic elements in the model. Next to identifying the critical factors of the Web marketing, the paper argues that the basis for successful E-Commerce is the full integration of the virtual activities into the company’s physical strategy, marketing plan and organisational processes. The four S elements of the Web-Marketing Mix framework present a sound and functional conceptual basis for designing, developing and commercialising Business-to-Consumer online projects. The model was originally developed for educational purposes and has been tested and refined by means of field projects; two of them are presented as case studies in the paper. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "dc4a2fa822a685997c83e6fd49b30f56",
"text": "Complex event processing (CEP) has become increasingly important for tracking and monitoring applications ranging from health care, supply chain management to surveillance. These monitoring applications submit complex event queries to track sequences of events that match a given pattern. As these systems mature the need for increasingly complex nested sequence queries arises, while the state-of-the-art CEP systems mostly focus on the execution of flat sequence queries only. In this paper, we now introduce an iterative execution strategy for nested CEP queries composed of sequence, negation, AND and OR operators. Lastly we have introduced the promising direction of applying selective caching of intermediate results to optimize the execution. Our experimental study using real-world stock trades evaluates the performance of our proposed iterative execution strategy for different query types.",
"title": ""
},
{
"docid": "edccb0babf1e6fe85bb1d7204ab0ea0a",
"text": "OBJECTIVE\nControlled study of the long-term outcome of selective mutism (SM) in childhood.\n\n\nMETHOD\nA sample of 33 young adults with SM in childhood and two age- and gender-matched comparison groups were studied. The latter comprised 26 young adults with anxiety disorders in childhood (ANX) and 30 young adults with no psychiatric disorders during childhood. The three groups were compared with regard to psychiatric disorder in young adulthood by use of the Composite International Diagnostic Interview (CIDI). In addition, the effect of various predictors on outcome of SM was studied.\n\n\nRESULTS\nThe symptoms of SM improved considerably in the entire SM sample. However, both SM and ANX had significantly higher rates for phobic disorder and any psychiatric disorder than controls at outcome. Taciturnity in the family and, by trend, immigrant status and a severity indicator of SM had an impact on psychopathology and symptomatic outcome in young adulthood.\n\n\nCONCLUSION\nThis first controlled long-term outcome study of SM provides evidence of symptomatic improvement of SM in young adulthood. However, a high rate of phobic disorder at outcome points to the fact that SM may be regarded as an anxiety disorder variant.",
"title": ""
},
{
"docid": "2907d1078ce8eaf8b01817cea3b9264c",
"text": "Having a reliable understanding about the behaviours, problems, and performance of existing processes is important in enabling a targeted process improvement initiative. Recently, there has been an increase in the application of innovative process mining techniques to facilitate evidence-based understanding about organizations’ business processes. Nevertheless, the application of these techniques in the domain of finance in Australia is, at best, scarce. This paper details a 6-month case study on the application of process mining in one of the largest insurance companies in Australia. In particular, the challenges encountered, the lessons learned, and the results obtained from this case study are detailed. Through this case study, we not only validated existing ‘lessons learned’ from other similar case studies, but also added new insights that can be beneficial to other practitioners in applying process mining in their respective fields.",
"title": ""
},
{
"docid": "a7f9da2652de7f00a30ebbe59098ae80",
"text": "Wireless Sensor Networks (WSNs) are becoming increasingly popular since they can gather information from different locations without wires. This advantage is exploited in applications such as robotic systems, telecare, domotic or smart cities, among others. To gain independence from the electricity grid, WSNs devices are equipped with batteries, therefore their operational time is determined by the time that the batteries can power on the device. As a consequence, engineers must consider low energy consumption as a critical objective to design WSNs. Several approaches can be taken to make efficient use of energy in WSNs, for instance low-duty-cycling sensor networks (LDC-WSN). Based on the LDC-WSNs, we present LOKA, a LOw power Konsumption Algorithm to minimize WSNs energy consumption using different power modes in a sensor mote. The contribution of the work is a novel algorithm called LOKA that implements two duty-cycling mechanisms using the end-device of the ZigBee protocol (of the Application Support Sublayer) and an external microcontroller (Cortex M0+) in order to minimize the energy consumption of a delay tolerant networking. Experiments show that using LOKA, the energy required by the sensor device is reduced to half with respect to the same sensor device without using LOKA.",
"title": ""
},
{
"docid": "79f87d478af99ef60efadb7c5ff7c4ec",
"text": "This study proposes an interior permanent magnet (IPM) brushless dc (BLDC) motor design strategy that utilizes BLDC control based on Hall sensor signals. The magnetic flux of IPM motors varies according to the rotor position and abnormal Hall sensor problems are related to magnetic flux. To find the cause of the abnormality in the Hall sensors, an analysis of the magnetic flux density at the Hall sensor position by finite element analysis is conducted. In addition, an IPM model with a notch structure is proposed to solve abnormal Hall sensor problems and its magnetic equivalent circuit (MEC) model is derived. Based on the MEC model, an optimal rotor design method is proposed and the final model is derived. However, the Hall sensor signal achieved from the optimal rotor is not perfect. To improve the accuracy of the BLDC motor control, a rotor position estimation method is proposed. Finally, experiments are performed to evaluate the performance of the proposed IPM-type BLDC motor and the Hall sensor compensation method.",
"title": ""
},
{
"docid": "d7a1985750fe10273c27f7f8121640ac",
"text": "The large volumes of data that will be produced by ubiquitous sensors and meters in future smart distribution networks represent an opportunity for the use of data analytics to extract valuable knowledge and, thus, improve Distribution Network Operator (DNO) planning and operation tasks. Indeed, applications ranging from outage management to detection of non-technical losses to asset management can potentially benefit from data analytics. However, despite all the benefits, each application presents DNOs with diverse data requirements and the need to define an adequate approach. Consequently, it is critical to understand the different interactions among applications, monitoring infrastructure and approaches involved in the use of data analytics in distribution networks. To assist DNOs in the decision making process, this work presents some of the potential applications where data analytics are likely to improve distribution network performance and the corresponding challenges involved in its implementation.",
"title": ""
},
{
"docid": "e3010b236d32ac0ba0909e0c054849ee",
"text": "We present a HMM based system for real-time gesture analysis. The system outputs continuously parameters relative to the gesture time progression and its likelihood. These parameters are computed by comparing the performed gesture with stored reference gestures. The method relies on a detailed modeling of multidimensional temporal curves. Compared to standard HMM systems, the learning procedure is simplified using prior knowledge allowing the system to use a single example for each class. Several applications have been developed using this system in the context of music education, music and dance performances and interactive installation. Typically, the estimation of the time progression allows for the synchronization of physical gestures to sound files by time stretching/compressing audio buffers or videos.",
"title": ""
},
{
"docid": "9311198676b2cc5ad31145c53c91134d",
"text": "A novel fractal called Fractal Clover Leaf (FCL) is introduced and shown to have well miniaturization capabilities. The proposed patches are fed by L-shape probe to achieve wide bandwidth operation in PCS band. A numerical parametric study on the proposed antenna is presented. It is found that the antenna can attain more than 72% size reduction as well as 17% impedance bandwidth (VSWR<2), in cost of less gain. It is also shown that impedance matching could be reached by tuning probe parameters. The proposed antenna is suitable for handset applications and tight packed planar phased arrays to achieve lower scan angels than rectangular patches.",
"title": ""
},
{
"docid": "4320278dcbf0446daf3d919c21606208",
"text": "The operation of different brain systems involved in different types of memory is described. One is a system in the primate orbitofrontal cortex and amygdala involved in representing rewards and punishers, and in learning stimulus-reinforcer associations. This system is involved in emotion and motivation. A second system in the temporal cortical visual areas is involved in learning invariant representations of objects. A third system in the hippocampus is implicated in episodic memory and in spatial function. Fourth, brain systems in the frontal and temporal cortices involved in short term memory are described. The approach taken provides insight into the neuronal operations that take place in each of these brain systems, and has the aim of leading to quantitative biologically plausible neuronal network models of how each of these memory systems actually operates.",
"title": ""
},
{
"docid": "7b496aac963284f3415ac98b3abd8165",
"text": "Forecasting is an important data analysis technique that aims to study historical data in order to explore and predict its future values. In fact, to forecast, different methods have been tested and applied from regression to neural network models. In this research, we proposed Elman Recurrent Neural Network (ERNN) to forecast the Mackey-Glass time series elements. Experimental results show that our scheme outperforms other state-of-art studies.",
"title": ""
},
{
"docid": "66fd7de53986e8c4a7ed08ed88f0b45b",
"text": "BACKGROUND\nConcerns regarding the risk of estrogen replacement have resulted in a significant increase in the use of soy products by menopausal women who, despite the lack of evidence of the efficacy of such products, seek alternatives to menopausal hormone therapy. Our goal was to determine the efficacy of soy isoflavone tablets in preventing bone loss and menopausal symptoms.\n\n\nMETHODS\nThe study design was a single-center, randomized, placebo-controlled, double-blind clinical trial conducted from July 1, 2004, through March 31, 2009. Women aged 45 to 60 years within 5 years of menopause and with a bone mineral density T score of -2.0 or higher in the lumbar spine or total hip were randomly assigned, in equal proportions, to receive daily soy isoflavone tablets, 200 mg, or placebo. The primary outcome was changes in bone mineral density in the lumbar spine, total hip, and femoral neck at the 2-year follow-up. Secondary outcomes included changes in menopausal symptoms, vaginal cytologic characteristics, N -telopeptide of type I bone collagen, lipids, and thyroid function.\n\n\nRESULTS\nAfter 2 years, no significant differences were found between the participants receiving soy tablets (n = 122) and those receiving placebo (n = 126) regarding changes in bone mineral density in the spine (-2.0% and -2.3%, respectively), the total hip (-1.2% and -1.4%, respectively), or the femoral neck (-2.2% and -2.1%, respectively). A significantly larger proportion of participants in the soy group experienced hot flashes and constipation compared with the control group. No significant differences were found between groups in other outcomes.\n\n\nCONCLUSIONS\nIn this population, the daily administration of tablets containing 200 mg of soy isoflavones for 2 years did not prevent bone loss or menopausal symptoms.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT00076050.",
"title": ""
},
{
"docid": "a39fb4e8c15878ba4fdac54f02451789",
"text": "The Cloud computing system can be easily threatened by various attacks, because most of the cloud computing systems provide service to so many people who are not proven to be trustworthy. Due to their distributed nature, cloud computing environment are easy targets for intruders[1]. There are various Intrusion Detection Systems having various specifications to each. Cloud computing have two approaches i. e. Knowledge-based IDS and Behavior-Based IDS to detect intrusions in cloud computing. Behavior-Based IDS assumes that an intrusion can be detected by observing a deviation from normal to expected behavior of the system or user[2]s. Knowledge-based IDS techniques apply knowledge",
"title": ""
},
{
"docid": "843cfe4948e412bc1f38fe55f436a851",
"text": "Longitudinal Online Research and Imaging System (LORIS) is a modular and extensible web-based data management system that integrates all aspects of a multi-center study: from heterogeneous data acquisition (imaging, clinical, behavior, and genetics) to storage, processing, and ultimately dissemination. It provides a secure, user-friendly, and streamlined platform to automate the flow of clinical trials and complex multi-center studies. A subject-centric internal organization allows researchers to capture and subsequently extract all information, longitudinal or cross-sectional, from any subset of the study cohort. Extensive error-checking and quality control procedures, security, data management, data querying, and administrative functions provide LORIS with a triple capability (1) continuous project coordination and monitoring of data acquisition (2) data storage/cleaning/querying, (3) interface with arbitrary external data processing \"pipelines.\" LORIS is a complete solution that has been thoroughly tested through a full 10 year life cycle of a multi-center longitudinal project and is now supporting numerous international neurodevelopment and neurodegeneration research projects.",
"title": ""
},
{
"docid": "a4cddba12bf99030fa02d986a453ad84",
"text": "QDT 2012 To obtain consistent esthetic outcomes, the design of dental restorations should be defined as soon as possible. The importance of gathering diagnostic data from questionnaires and checklists1–7 cannot be overlooked; however, much of this information may be lost if it is not transferred adequately to the design of the restorations. The diagnostic data must guide the subsequent treatment phases,8 integrating all of the patient’s needs, desires, and functional and biologic issues into an esthetic treatment design.9,10 The Digital Smile Design (DSD) is a multi-use conceptual tool that can strengthen diagnostic vision, improve communication, and enhance predictability throughout treatment. The DSD allows for careful analysis of the patient’s facial and dental characteristics along with any critical factors that may have been overlooked during clinical, photographic, or diagnostic cast–based evaluation procedures. The drawing of reference lines and shapes over extraand intraoral digital photographs in a predetermined sequence can widen diagnostic visualization and help the restorative team evaluate the limitations and risk factors of a given case, including asymmetries, disharmonies, and violations of esthetic principles.1 DSD sketches can be performed in presentation software such as Keynote (iWork, Apple, Cupertino, California, USA) or Microsoft PowerPoint (Microsoft Office, Microsoft, Redmond, Washington, USA). This improved visualization makes it easier to select the ideal restorative technique. The DSD protocol is characterized by effective communication between the interdisciplinary dental team, including the dental technician. Team members can identify and highlight discrepancies in soft or hard tissue morphology and discuss the best available solutions using the amplified images. Every team member can add information directly on the slides in writing or using voice-over, thus simplifying the process even more. All team members can access this information whenever necessary to review, alter, or add elements during the diagnostic and treatment phases. Digital Smile Design: A Tool for Treatment Planning and Communication in Esthetic Dentistry",
"title": ""
},
{
"docid": "83692fd5290c7c2a43809e1e2014566d",
"text": "Humans have a biological predisposition to form attachment to social partners, and they seem to form attachment even toward non-human and inanimate targets. Attachment styles influence not only interpersonal relationships, but interspecies and object attachment as well. We hypothesized that young people form attachment toward their mobile phone, and that people with higher attachment anxiety use the mobile phone more likely as a compensatory attachment target. We constructed a scale to observe people's attachment to their mobile and we assessed their interpersonal attachment style. In this exploratory study we found that young people readily develop attachment toward their phone: they seek the proximity of it and experience distress on separation. People's higher attachment anxiety predicted higher tendency to show attachment-like features regarding their mobile. Specifically, while the proximity of the phone proved to be equally important for people with different attachment styles, the constant contact with others through the phone was more important for anxiously attached people. We conclude that attachment to recently emerged artificial objects, like the mobile may be the result of cultural co-option of the attachment system. People with anxious attachment style may face challenges as the constant contact and validation the computer-mediated communication offers may deepen their dependence on others. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "58b3bc0fed5a5556591c11ae781e0cc1",
"text": "Understanding the solid biomechanics of the human body is important to the study of structure and function of the body, which can have a range of applications in health care, sport, well-being, and workflow analysis. Conventional laboratory-based biomechanical analysis systems and observation-based tests are designed only to capture brief snapshots of the mechanics of movement. With recent developments in wearable sensing technologies, biomechanical analysis can be conducted in less-constrained environments, thus allowing continuous monitoring and analysis beyond laboratory settings. In this paper, we review the current research in wearable sensing technologies for biomechanical analysis, focusing on sensing and analytics that enable continuous, long-term monitoring of kinematics and kinetics in a free-living environment. The main technical challenges, including measurement drift, external interferences, nonlinear sensor properties, sensor placement, and muscle variations, that can affect the accuracy and robustness of existing methods and different methods for reducing the impact of these sources of errors are described in this paper. Recent developments in motion estimation in kinematics, mobile force sensing in kinematics, sensor reduction for electromyography, and the future direction of sensing for biomechanics are also discussed.",
"title": ""
},
{
"docid": "97ac64bb4d06216253eacb17abfcb103",
"text": "UIMA Ruta is a rule-based system designed for information extraction tasks, but it is also applicable for many natural language processing use cases. This demonstration gives an overview of the UIMA Ruta Workbench, which provides a development environment and tooling for the rule language. It was developed to ease every step in engineering rule-based applications. In addition to the full-featured rule editor, the user is supported by explanation of the rule execution, introspection in results, automatic validation and rule induction. Furthermore, the demonstration covers the usage and combination of arbitrary components for natural language processing.",
"title": ""
}
] |
scidocsrr
|
351883cba720b43f0d630484c5258420
|
Active scene recognition with vision and language
|
[
{
"docid": "b5347e195b44d5ae6d4674c685398fa3",
"text": "The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N £ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position an$ image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Pragnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory.",
"title": ""
},
{
"docid": "cf7c5ae92a0514808232e4e9d006024a",
"text": "We present an interactive, hybrid human-computer method for object classification. The method applies to classes of objects that are recognizable by people with appropriate expertise (e.g., animal species or airplane model), but not (in general) by people without such expertise. It can be seen as a visual version of the 20 questions game, where questions based on simple visual attributes are posed interactively. The goal is to identify the true class while minimizing the number of questions asked, using the visual content of the image. We introduce a general framework for incorporating almost any off-the-shelf multi-class object recognition algorithm into the visual 20 questions game, and provide methodologies to account for imperfect user responses and unreliable computer vision algorithms. We evaluate our methods on Birds-200, a difficult dataset of 200 tightly-related bird species, and on the Animals With Attributes dataset. Our results demonstrate that incorporating user input drives up recognition accuracy to levels that are good enough for practical applications, while at the same time, computer vision reduces the amount of human interaction required.",
"title": ""
}
] |
[
{
"docid": "7f245ee8064620520dfbd28da28bbdf1",
"text": "iI31 --; “The accuracy of radar angle measurements,” Hughes Aircraft Comoanv. Tech. Reo. IDC 2764/63. Auz. 13. 1973. -; “Monopulse accuracy-scintillatingvtargets,” Hughes Aircraft Company, Tech. Rep. IDC 2241/S, Feb. II, 1974. R. A. Dibos, Nonlinear mapping of MLE angle information,” Hughes Aircraft Company, Tech. Rep. IDC 23 12.00/l 84, Mar. 1978. E. M. Hofstetter and D. F. Delong, Jr., “Detection and parameter estimation in an amplitude comparison monopulse radar,” IEEE Truns. Inform. Theory, vol. IT-15, pp 22-30, Jan. 1969. A. M. Kagan and V. V. Pits, “An invariant detection of signal by a monopulse radar receiver,” Radio Eng. Electron. Phys., vol. 21, no. 11, pp 42-46, 1976. A. K. Zhuravlev and N. A. Suslov, “Statistical characteristics of signals at the output of the receiving equipment of a monopulse aoniometer.” Rodio Ena. Electron. Phvs.. vol. 14. no. 12. vv 1945i948, 1969. __",
"title": ""
},
{
"docid": "b581717dca731a6fd216d8d4d9530b9c",
"text": "In the last few years, there has been increasing interest from the agent community in the use of techniques from decision theory and game theory. Our aims in this article are firstly to briefly summarize the key concepts of decision theory and game theory, secondly to discuss how these tools are being applied in agent systems research, and finally to introduce this special issue of Autonomous Agents and Multi-Agent Systems by reviewing the papers that appear.",
"title": ""
},
{
"docid": "751689427492a952a5b1238c62f45db4",
"text": "This work concerns the behavior study of a MPPT algorithm based on the incremental conductance. An open loop analysis of a photovoltaic chain, modeled in matlab-simulink, allows to extract the maximum power of the photovoltaic panel. The principle is based on the determination instantaneously of the conductance and its tendency materialized by a signed increment. A buck step down converter is used to adapt the voltage to its appropriate value to reach a maximal power extraction. This novel analysis method applied to the photovoltaic system is made under different atmospheric parameters. The performances are shown using Matlab/Simulink software.",
"title": ""
},
{
"docid": "a2c1d20fe84f24f5fcfc7aa5783b9a40",
"text": "BACKGROUND\nA self-report screening scale of adult attention-deficit/hyperactivity disorder (ADHD), the World Health Organization (WHO) Adult ADHD Self-Report Scale (ASRS) was developed in conjunction with revision of the WHO Composite International Diagnostic Interview (CIDI). The current report presents data on concordance of the ASRS and of a short-form ASRS screener with blind clinical diagnoses in a community sample.\n\n\nMETHOD\nThe ASRS includes 18 questions about frequency of recent DSM-IV Criterion A symptoms of adult ADHD. The ASRS screener consists of six out of these 18 questions that were selected based on stepwise logistic regression to optimize concordance with the clinical classification. ASRS responses were compared to blind clinical ratings of DSM-IV adult ADHD in a sample of 154 respondents who previously participated in the US National Comorbidity Survey Replication (NCS-R), oversampling those who reported childhood ADHD and adult persistence.\n\n\nRESULTS\nEach ASRS symptom measure was significantly related to the comparable clinical symptom rating, but varied substantially in concordance (Cohen's kappa in the range 0.16-0.81). Optimal scoring to predict clinical syndrome classifications was to sum unweighted dichotomous responses across all 18 ASRS questions. However, because of the wide variation in symptom-level concordance, the unweighted six-question ASRS screener outperformed the unweighted 18-question ASRS in sensitivity (68.7% v. 56.3%), specificity (99.5% v. 98.3%), total classification accuracy (97.9% v. 96.2%), and kappa (0.76 v. 0.58).\n\n\nCONCLUSIONS\nClinical calibration in larger samples might show that a weighted version of the 18-question ASRS outperforms the six-question ASRS screener. Until that time, however, the unweighted screener should be preferred to the full ASRS, both in community surveys and in clinical outreach and case-finding initiatives.",
"title": ""
},
{
"docid": "349e0be3b956c038c0f34e3b4d7d4894",
"text": "The integrity of RNA molecules is of paramount importance for experiments that try to reflect the snapshot of gene expression at the moment of RNA extraction. Until recently, there has been no reliable standard for estimating the integrity of RNA samples and the ratio of 28S:18S ribosomal RNA, the common measure for this purpose, has been shown to be inconsistent. The advent of microcapillary electrophoretic RNA separation provides the basis for an automated high-throughput approach, in order to estimate the integrity of RNA samples in an unambiguous way. A method is introduced that automatically selects features from signal measurements and constructs regression models based on a Bayesian learning technique. Feature spaces of different dimensionality are compared in the Bayesian framework, which allows selecting a final feature combination corresponding to models with high posterior probability. This approach is applied to a large collection of electrophoretic RNA measurements recorded with an Agilent 2100 bioanalyzer to extract an algorithm that describes RNA integrity. The resulting algorithm is a user-independent, automated and reliable procedure for standardization of RNA quality control that allows the calculation of an RNA integrity number (RIN). Our results show the importance of taking characteristics of several regions of the recorded electropherogram into account in order to get a robust and reliable prediction of RNA integrity, especially if compared to traditional methods.",
"title": ""
},
{
"docid": "8e39f24715fa289df42e1e60910a4bdd",
"text": "BACKGROUND\nThis review's goal was to determine how differences between physicians and patients in race, ethnicity and language influence the quality of the physician-patient relationship.\n\n\nMETHODS\nWe performed a literature review to assess existing evidence for ethnic and racial disparities in the quality of doctor-patient communication and the doctor-patient relationship.\n\n\nRESULTS\nWe found consistent evidence that race, ethnicity; and language have substantial influence on the quality of the doctor-patient relationship. Minority patients, especially those not proficient in English, are less likely to engender empathic response from physicians, establish rapport with physicians, receive sufficient information, and be encouraged to participate in medical decision making.\n\n\nCONCLUSIONS\nThe literature calls for a more diverse physician work force since minority patients are more likely to choose minority physicians, to be more satisfied by language-concordant relationships, and to feel more connected and involved in decision making with racially concordant physicians. The literature upholds the recommendation for professional interpreters to bridge the gaps in access experienced by non-English speaking physicians. Further evidence supports the admonition that \"majority\" physicians need to be more effective in developing relationships and in their communication with ethnic and racial minority patients.",
"title": ""
},
{
"docid": "1f3985e9c8bbad7279ee7ebfda74a8a8",
"text": "Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of image classifiers? In this paper, we show empirically that adversarial examples mainly lie in the low probability regions of the training distribution, regardless of attack types and targeted models. Using statistical hypothesis testing, we find that modern neural density models are surprisingly good at detecting imperceptible image perturbations. Based on this discovery, we devised PixelDefend, a new approach that purifies a maliciously perturbed image by moving it back towards the distribution seen in the training data. The purified image is then run through an unmodified classifier, making our method agnostic to both the classifier and the attacking method. As a result, PixelDefend can be used to protect already deployed models and be combined with other model-specific defenses. Experiments show that our method greatly improves resilience across a wide variety of state-of-the-art attacking methods, increasing accuracy on the strongest attack from 63% to 84% for Fashion MNIST and from 32% to 70% for CIFAR-10.",
"title": ""
},
{
"docid": "7f1625c0d1ed39245c77db9cd3ca2bd7",
"text": "We address the computational problem of novel human pose synthesis. Given an image of a person and a desired pose, we produce a depiction of that person in that pose, retaining the appearance of both the person and background. We present a modular generative neural network that synthesizes unseen poses using training pairs of images and poses taken from human action videos. Our network separates a scene into different body part and background layers, moves body parts to new locations and refines their appearances, and composites the new foreground with a hole-filled background. These subtasks, implemented with separate modules, are trained jointly using only a single target image as a supervised label. We use an adversarial discriminator to force our network to synthesize realistic details conditioned on pose. We demonstrate image synthesis results on three action classes: golf, yoga/workouts and tennis, and show that our method produces accurate results within action classes as well as across action classes. Given a sequence of desired poses, we also produce coherent videos of actions.",
"title": ""
},
{
"docid": "1ec4415f1ff6dd2da304cba01e4d6e0c",
"text": "In disruption-tolerant networks (DTNs), network topology constantly changes and end-to-end paths can hardly be sustained. However, social network properties are observed in many DTNs and tend to be stable over time. To utilize the social network properties to facilitate packet forwarding, we present LocalCom, a community-based epidemic forwarding scheme that efficiently detects the community structure using limited local information and improves the forwarding efficiency based on the community structure. We define similarity metrics according to nodes’ encounter history to depict the neighboring relationship between each pair of nodes. A distributed algorithm which only utilizes local information is then applied to detect communities, and the formed communities have strong intra-community connections. We also present two schemes to mark and prune gateways that connect communities to control redundancy and facilitate inter-community packet forwarding. Extensive real-trace-driven simulation results are presented to support the effectiveness of our scheme.",
"title": ""
},
{
"docid": "3bc7adca896ab0c18fd8ec9b8c5b3911",
"text": "Traditional algorithms to design hand-crafted features for action recognition have been a hot research area in last decade. Compared to RGB video, depth sequence is more insensitive to lighting changes and more discriminative due to its capability to catch geometric information of object. Unlike many existing methods for action recognition which depend on well-designed features, this paper studies deep learning-based action recognition using depth sequences and the corresponding skeleton joint information. Firstly, we construct a 3Dbased Deep Convolutional Neural Network (3DCNN) to directly learn spatiotemporal features from raw depth sequences, then compute a joint based feature vector named JointVector for each sequence by taking into account the simple position and angle information between skeleton joints. Finally, support vector machine (SVM) classification results from 3DCNN learned features and JointVector are fused to take action recognition. Experimental results demonstrate that our method can learn feature representation which is time-invariant and viewpoint-invariant from depth sequences. The proposed method achieves comparable results to the state-of-the-art methods on the UTKinect-Action3D dataset and achieves superior performance in comparison to baseline methods on the MSR-Action3D dataset. We further investigate the generalization of the trained model by transferring the learned features from one dataset (MSREmail addresses: [email protected] (Zhi Liu), [email protected] (Chenyang Zhang), [email protected] (Yingli Tian) Preprint submitted to Image and Vision Computing April 11, 2016 Action3D) to another dataset (UTKinect-Action3D) without retraining and obtain very promising classification accuracy.",
"title": ""
},
{
"docid": "7531be3af1285a4c1c0b752d1ee45f52",
"text": "Given an undirected graph with weight for each vertex, the maximum weight clique problem is to find the clique of the maximum weight. Östergård proposed a fast exact algorithm for solving this problem. We show his algorithm is not efficient for very dense graphs. We propose an exact algorithm for the problem, which is faster than Östergård’s algorithm in case the graph is dense. We show the efficiency of our algorithm with some experimental results.",
"title": ""
},
{
"docid": "33397c974c1445aa941c43456e8ef01f",
"text": "Convolutional neural networks (CNN) have achieved the top performance for event detection due to their capacity to induce the underlying structures of the k-grams in the sentences. However, the current CNN-based event detectors only model the consecutive k-grams and ignore the non-consecutive kgrams that might involve important structures for event detection. In this work, we propose to improve the current CNN models for ED by introducing the non-consecutive convolution. Our systematic evaluation on both the general setting and the domain adaptation setting demonstrates the effectiveness of the nonconsecutive CNN model, leading to the significant performance improvement over the current state-of-the-art systems.",
"title": ""
},
{
"docid": "f442fa8d061e32891f486a14c3a76748",
"text": "We compare and discuss various approaches to the problem of part of speech (POS) tagging of texts written in Kazakh, an agglutinative and highly inflectional Turkic language. In Kazakh a single root may produce hundreds of word forms, and it is difficult, if at all possible, to label enough training data to account for a vast set of all possible word forms in the language. Thus, current state of the art statistical POS taggers may not be as effective for Kazakh as for morphologically less complex languages, e.g. English. Also the choice of a POS tag set may influence the informativeness and the accuracy of tagging.",
"title": ""
},
{
"docid": "17066d168d60a8eadc28587850b98723",
"text": "We marry two powerful ideas: deep representation learning for visual recognition and language understanding, and symbolic program execution for reasoning. Our neural-symbolic visual question answering (NS-VQA) system first recovers a structural scene representation from the image and a program trace from the question. It then executes the program on the scene representation to obtain an answer. Incorporating symbolic structure as prior knowledge offers three unique advantages. First, executing programs on a symbolic space is more robust to long program traces; our model can solve complex reasoning tasks better, achieving an accuracy of 99.8% on the CLEVR dataset. Second, the model is more dataand memory-efficient: it performs well after learning on a small number of training data; it can also encode an image into a compact representation, requiring less storage than existing methods for offline question answering. Third, symbolic program execution offers full transparency to the reasoning process; we are thus able to interpret and diagnose each execution step.",
"title": ""
},
{
"docid": "08dab42f86183ffcdcca88735525bddd",
"text": "Do GANS (Generative Adversarial Nets) actually learn the target distribution? The foundational paper of (Goodfellow et al 2014) suggested they do, if they were given “sufficiently large” deep nets, sample size, and computation time. A recent theoretical analysis in Arora et al (to appear at ICML 2017) raised doubts whether the same holds when discriminator has finite size. It showed that the training objective can approach its optimum value even if the generated distribution has very low support —in other words, the training objective is unable to prevent mode collapse. The current note reports experiments suggesting that such problems are not merely theoretical. It presents empirical evidence that well-known GANs approaches do learn distributions of fairly low support, and thus presumably are not learning the target distribution. The main technical contribution is a new proposed test, based upon the famous birthday paradox, for estimating the support size of the generated distribution.",
"title": ""
},
{
"docid": "95d8b83eadde6d6da202341c0b9238c8",
"text": "Numerous studies have demonstrated that water-based compost preparations, referred to as compost tea and compost-water extract, can suppress phytopathogens and plant diseases. Despite its potential, compost tea has generally been considered as inadequate for use as a biocontrol agent in conventional cropping systems but important to organic producers who have limited disease control options. The major impediments to the use of compost tea have been the lessthan-desirable and inconsistent levels of plant disease suppression as influenced by compost tea production and application factors including compost source and maturity, brewing time and aeration, dilution and application rate and application frequency. Although the mechanisms involved in disease suppression are not fully understood, sterilization of compost tea has generally resulted in a loss in disease suppressiveness. This indicates that the mechanisms of suppression are often, or predominantly, biological, although physico-chemical factors have also been implicated. Increasing the use of molecular approaches, such as metagenomics, metaproteomics, metatranscriptomics and metaproteogenomics should prove useful in better understanding the relationships between microbial abundance, diversity, functions and disease suppressive efficacy of compost tea. Such investigations are crucial in developing protocols for optimizing the compost tea production process so as to maximize disease suppressive effect without exposing the manufacturer or user to the risk of human pathogens. To this end, it is recommended that compost tea be used as part of an integrated disease management system.",
"title": ""
},
{
"docid": "083989d115f6942b362c06936b2775ea",
"text": "In humans, nearly two meters of genomic material must be folded to fit inside each micrometer-scale cell nucleus while remaining accessible for gene transcription, DNA replication, and DNA repair. This fact highlights the need for mechanisms governing genome organization during any activity and to maintain the physical organization of chromosomes at all times. Insight into the functions and three-dimensional structures of genomes comes mostly from the application of visual techniques such as fluorescence in situ hybridization (FISH) and molecular approaches including chromosome conformation capture (3C) technologies. Recent developments in both types of approaches now offer the possibility of exploring the folded state of an entire genome and maybe even the identification of how complex molecular machines govern its shape. In this review, we present key methodologies used to study genome organization and discuss what they reveal about chromosome conformation as it relates to transcription regulation across genomic scales in mammals.",
"title": ""
},
{
"docid": "5900299f078030bbad5872750b1e5eeb",
"text": "Penile Mondor’s Disease (Superficial thrombophlebitis of the dorsal vein of the penis) is a rare and important disease that every clinician should be able to diagnose, which present with pain and in duration of the dorsal part of the penis. The various possible causes are trauma, excessive sexual activity neoplasms,, or abstinence. Diagnosis is mainly based on history and physical examination. Though diagnosis is mainly based on history and physical examination, Doppler ultrasound is considered as the imaging modality of choice. Sclerotizing lymphangitis and Peyronies disease must be considered in differential diagnosis. Accurate diagnosis and Propercounseling can help to relieve the anxiety experienced by the patients regarding this benign disease. We are describing the symptoms, diagnosis, and treatment of the superficial thrombophlebitis of the dorsal vein of the penis.",
"title": ""
},
{
"docid": "2560c024f27ceec6dd055316ddf84d4f",
"text": "A model is developed which implies that if an analyst has high reputation or low ability, or if there is strong public information that is inconsistent with the analyst's private information, she is likely to herd. Herding is also common when informative private signals are positively correlated across analysts. The model is tested using data from analysts who publish investment newsletters. Consistent with the model's implications, the empirical results indicate that a newsletter analyst is likely to herd on Value Line's recommendation if her reputation is high, if her ability is low, or if signal correlation is high. HERDBEHAVIOR IS OFTEN SAID TO OCCUR when many people take the same action, perhaps because some mimic the actions of others. Herding has been theoretically linked to many economic activities, such as investment recommendations (Scharfstein and Stein (1990)), price behavior of IPOs (Welch (1992)),fads and customs (Bikhchandani, Hirshleifer, and Welch (1992)),earnings forecasts (Trueman (1994)), corporate conservatism (Zwiebel (1995)), and delegated portfolio management (Maug and Naik (1995)). This paper adds to the herding literature by developing and empirically testing a model that examines the incentives investment advisors face when deciding whether to herd. In particular, the paper tests whether economic conditions and agents' individual characteristics affect their likelihood of herding. The results are interpreted as a test of the predictions of the general class of cascade and herding models.1 * Fuqua School of Business, Duke University. I am grateful to David Hirshleifer and Jaime Zender for comments that helped to substantially improve the paper. I would also like to thank Pete Kyle,Alon Brav, Doug Foster, Dan Graham, Rita Graham, Paul Harrison, Eric Hughson, Ron Lease, Mike Lemmon, Ernst Maug, Susan Monaco, Carl Moody, Barb Ostdiek, Drew Roper, Steve Slezak, Ren6 Stulz, Tom Smith, Brett Trueman, Vish Viswanathan, anonymous referees, and seminar participants at Duke, Tulane, and the University of Utah for helpful comments. I am grateful to Mark Hulbert and The Hulbert Financial Digest for providing the newsletter data, to David Hsieh for providing the daily S&P500 index volatility estimates, and to Yunqi Han and the Federal Reserve Bank of Philadelphia for providing the data on Treasury bill forecasts. I am responsible for all remaining errors. The theoretical part of the paper was a chapter of my doctoral dissertation at Duke University. The empirical work was started while I was at the University of Utah. Welch (1996) also tests implications from the general class of herding models. He finds that brokerage recommendations are influenced by the consensus opinion of many brokers, especially in bullish market conditions or when the consensus proves to be wrong. He interprets the latter condition as being consistent with the implications from models that show that herding is sometimes based on little or no information (e.g., Scharfstein and Stein (1990) or Bikhchandani et al. (1992)). The Journal of Finance We investigate the herding phenomenon using a simple model of stock analysts, patterned after the model in Scharfstein and Stein (1990). Each analyst in our model is one of two types, smart or dumb, although the type is unobservable to all. Smart analysts receive informative private signals about the stock market's expected return, dumb analysts receive uninformative signals. 
The smart analysts' signals are positively cross-correlated, implying that smart analysts following their private information have a tendency to act similarly. Consequently, in certain circumstances, an analyst can \"look smart\" by herding. The analysts in the model act sequentially. The theoretical part of the paper investigates several factors that provide incentives for the second-mover to discard her private information and instead mimic the action of the first-mover. The analysts use Bayes' rule to determine their optimal actions and so prior public information is an important input in their decision-making processes, as is the precision of their private information (which we interpret as ability). The amount of correlation across informative private signals is also instrumental because it affects the degree to which analysts can look smart by herding. Finally, given that analysts maximize expected posterior reputation, their prior reputations also influence their optimal decisions. After documenting the existence of parameter regions associated with \"herding\" and \"deviating\" equilibria, comparative statics are used to show that the incentive for the second-mover to discard her private information and instead mimic the market leader 1. increases with her initial reputation 2. decreases with her ability 3. increases in the strength of prior public information that is consistent with the leader's action 4. increases with the level of correlation across informative signals. Though these factors are obviously interrelated, it is instructive to isolate the individual contribution of each to herding behavior, rather than blurring the distinction among them, as is often done.2 The intuition behind the reputation implication is that analysts with high reputation (and salary) herd to protect their current status and level of pay? For example, Institutional Investor's All-American Research Team is made up of high reputation analysts. Stickel (1990, 1992) shows that All-Americans give more accurate earnings forecasts and \"follow the crowd\" less often than non-All-Americans. Based on these findings, it appears that having a high reputation reduces the incentive to herd. In contrast, our model indicates that, to preserve status and salary, high reputation All-Americans have greater incentive to herd than non-All-Americans of equal ability. This implication may seem to contradict Stickel's (1990) finding that All-Americans \"follow the crowd\" less; however, his results reflect the net effect of reputation, ability, and other factors. We can isolate the effect of reputation on herding only by controlling for the other factors. This is consistent with the implication in Prendergast and Stole (1996) that \"youngsters\" exaggerate private information to look knowledgeable, while \"old-timers\" make more conservative decisions. However, their prediction arises because old-timers do not want to deviate too far from their own past decisions, while our model predicts that agents herd on a leader's current decision to remain part of the crowd. 239 Herding among Investment Newsletters We test the implications of the theoretical model with a sample of investment newsletter asset allocation recommendations. A typical newsletter contains four to eight pages of analysis of current economic trends, combined with the newsletter editor's interpretation of how the trends affect various investment strategies. 
Though the mode and frequency of information transfer varies widely, the typical newsletter is published monthly and mailed to subscribers for an annual fee of approximately $200; some letters also have a telephone, Internet, or fax updating service. The best known investment newsletter is the Value Line Investment Survey. Our sample consists of the market timing advice (i.e., recommendations about what portion of an investor's wealth should be invested in the stock market, cash, etc.) offered by 237 newsletter strategies over the period 1980 to 1992. Using these data, we identify the attributes of newsletters that herd on the advice of Value Line. Our strongest empirical finding is that herding decreases with the precision of private information, which lends support to the broad class of cascade and herding models. We also find evidence supporting the predictions that the incidence of mimicking Value Line increases with newsletter reputation, when a proxy for private information is highly correlated across analysts, and when prior information is strong. The herding literature can be subdivided in the following manner, although these categories are neither exhaustive nor mutually exclusive: (1)informational cascades, (2) reputational herding, (3) investigative herding, and (4) empirical herding. (For a general review of the herding literature, see Devenow and Welch (1996).) The first two types of herding occur when individuals choose to ignore or downplay their private information and instead jump on the bandwagon by mimicking the actions of individuals who acted previously. Informational cascades occur when the existing aggregate information becomes so overwhelming that an individual's single piece of private information is not strong enough to reverse the decision of the crowd. Therefore, the individual chooses to mimic the action of the crowd, rather than act on his private information. If this scenario holds for one individual, then it likely also holds for anyone acting after this person. This domino-like effect is often referred to as a cascade. Research by Welch (1992), Bikhchandani et al. (1992), Banerjee (1992), Lee (1993), Smith and Sorensen (1994), Khanna and Slezak (1998), Banerjee and Fudenberg (1995), and Brandenburger and Polak (1996) investigates cascades. Like cascades, reputational herding takes place when an agent chooses to ignore her private information and mimic the action of another agent who has acted previously. However, reputational herding models have an additional layer of mimicking resulting from positive reputational externalities that can be obtained by acting as part of a group or choosing a certain project. Our theoretical model falls in the reputational herding category. Other reputational herding models include Scharfstein and Stein (1990), Trueman (1994), Zwiebel (1995), Huddart (1996), and Prendergast and Stole (1996). Because these papers deal with issues similar to those investigated by our paper, they are discussed in detail in later sections. 240 The Journal of Finance Investigative herding occurs whe",
"title": ""
},
{
"docid": "3beb7efa16f95eaf33c119b244b25a70",
"text": "The framework reconstruction of the nose is a significant and complex component of its partial or total reconstruction. On the one hand, the design of the individual framework parts is based on the anatomic nature of available rib or ear cartilage, which must on the other hand be adapted to the anatomic characteristics of the defect. The framework parts must be anchored not only to each other but also stably to the facial skeleton. The symmetry of the framework reconstruction is an essential component of the aesthetics of the reconstructed nose. If these points are already considered in planning, the reconstruction of the nasal framework can be standardized insofar as the same principles for the basic design of the individual parts as well as stable solutions for the anchoring points can be chosen. With reproducible techniques, functionally and aesthetically good to very good results can be achieved, including in the long term. The surgeon must possess special skills in the field of nasal reconstruction to correctly choose, apply, and combine the various techniques of nasal framework reconstruction.",
"title": ""
}
] |
scidocsrr
|
1fcb563fa4360204a8f72d13b3fff288
|
Droplet-trace-based array partitioning and a pin assignment algorithm for the automated design of digital microfluidic biochips
|
[
{
"docid": "9824a6ec0809cefdec77a52170670d17",
"text": "The use of planar fluidic devices for performing small-volume chemistry was first proposed by analytical chemists, who coined the term “miniaturized total chemical analysis systems” ( TAS) for this concept. More recently, the TAS field has begun to encompass other areas of chemistry and biology. To reflect this expanded scope, the broader terms “microfluidics” and “lab-on-a-chip” are now often used in addition to TAS. Most microfluidics researchers rely on micromachining technologies at least to some extent to produce microflow systems based on interconnected micrometer-dimensioned channels. As members of the microelectromechanical systems (MEMS) community know, however, one can do more with these techniques. It is possible to impart higher levels of functionality by making features in different materials and at different levels within a microfluidic device. Increasingly, researchers have considered how to integrate electrical or electrochemical function into chips for purposes as diverse as heating, temperature sensing, electrochemical detection, and pumping. MEMS processes applied to new materials have also resulted in new approaches for fabrication of microchannels. This review paper explores these and other developments that have emerged from the increasing interaction between the MEMS and microfluidics worlds.",
"title": ""
}
] |
[
{
"docid": "7f19a1aa06bb21443992cb5283636d9f",
"text": "Traceability is important in the food supply chain to ensure the consumerspsila food safety, especially for the fresh products. In recent years, many solutions which applied various emerging technology have been proposed to improve the traceability of fresh product. However, the traceability system needs to be customized to satisfy different requirements. The system depends on the different product properties and supply chain models. This paper proposed a RFID-enabled traceability system for live fish supply chain. The system architecture is designed according to the specific requirement gathered in the life fish processing. Likewise, it is adaptive for the small and medium enterprises. The RFID tag is put on each live fish and is regarded as the mediator which links the live fish logistic center, retail restaurants and consumers for identification. The sensors controlled by the PLC are used to collect the information in farming as well as the automatic transporting processes. The traceability information is designed to be exchanged and used on a Web-based system for farmers and consumers. The system was implemented and deployed in the live fish logistic center for trial, and the results are valuable for practical reference.",
"title": ""
},
{
"docid": "6b16bc1aeb9ad7bc25bf2154c534d5dc",
"text": "Neighbor Discovery for IP Version 6 (IPv6) | <draft-ietf-ipngwg-discovery-v2-01.txt> | Status of this Memo This document is an Internet-Draft. Internet-Drafts are working * documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as ''work in progress.'' To learn the current status of any Internet-Draft, please check the ''1id-abstracts.txt'' listing contained in the Internet-Drafts Shadow Directories on ds.internic.net (US East Coast), nic.nordu.net Abstract This document specifies the Neighbor Discovery protocol for IP * Version 6. IPv6 nodes on the same link use Neighbor Discovery to discover each other's presence, to determine each other's link-layer addresses, to find routers and to maintain reachability information about the paths to active neighbors. * draft-ietf-ipngwg-discovery-v2-01.txt [Page 1]",
"title": ""
},
{
"docid": "bee18c0e11ec5db199861ef74b06bfe1",
"text": "Financial time series are complex, non-stationary and deterministically chaotic. Technical indicators are used with principal component analysis (PCA) in order to identify the most influential inputs in the context of the forecasting model. Neural networks (NN) and support vector regression (SVR) are used with different inputs. Our assumption is that the future value of a stock price depends on the financial indicators although there is no parametric model to explain this relationship. This relationship comes from technical analysis. Comparison shows that SVR and MLP networks require different inputs. The MLP networks outperform the SVR technique.",
"title": ""
},
{
"docid": "b2ff879a41647b978118aacbcf9a2108",
"text": "In this paper we present two new variable step size (VSS) methods for adaptive filters. These VSS methods are so effective, they eliminate the need for a separate double-talk detection algorithm in echo cancellation applications. The key feature of both approaches is the introduction of a new near-end signal energy estimator (NESEE) that provides accurate and computationally efficient estimates even during double-talk and echo path change events. The first VSS algorithm applies the NESEE to the recently proposed Nonparametric VSS NLMS (NPVSS-NLMS) algorithm. The resulting algorithm has excellent convergence characteristics with an intrinsic immunity to double-talk. The second approach is somewhat more ad hoc. It is composed of a combination of an efficient echo path change detector and the NESEE. This VSS method also has excellent convergence, double talk immunity, and computational efficiency. Simulations demonstrate the efficacy of both proposed algorithms.",
"title": ""
},
{
"docid": "6ec924cd7c2bb4c2694c2b0562c43241",
"text": "3D sketch-based 3D model retrieval is to retrieve similar 3D models using users' hand-drawn 3D sketches as input. Compared with traditional 2D sketch-based retrieval, 3D sketch-based 3D model retrieval is a brand new and challenging research topic. In this paper, we employ advanced deep learning method and propose a novel 3D sketch based 3D model retrieval system. Our system has been comprehensively tested on two benchmark datasets and compared with other existing 3D model retrieval algorithms. The experimental results reveal our approach outperforms other competing state-of-the-arts and demonstrate promising potential of our approach on 3D sketch based applications.",
"title": ""
},
{
"docid": "4357e361fd35bcbc5d6a7c195a87bad1",
"text": "In an age of increasing technology, the possibility that typing on a keyboard will replace handwriting raises questions about the future usefulness of handwriting skills. Here we present evidence that brain activation during letter perception is influenced in different, important ways by previous handwriting of letters versus previous typing or tracing of those same letters. Preliterate, five-year old children printed, typed, or traced letters and shapes, then were shown images of these stimuli while undergoing functional MRI scanning. A previously documented \"reading circuit\" was recruited during letter perception only after handwriting-not after typing or tracing experience. These findings demonstrate that handwriting is important for the early recruitment in letter processing of brain regions known to underlie successful reading. Handwriting therefore may facilitate reading acquisition in young children.",
"title": ""
},
{
"docid": "e8a1330f93a701939367bd390e9018c7",
"text": "An eccentric paddle locomotion mechanism based on the epicyclic gear mechanism (ePaddle-EGM), which was proposed to enhance the mobility of amphibious robots in multiterrain tasks, can perform various terrestrial and aquatic gaits. Two of the feasible aquatic gaits are the rotational paddling gait and the oscillating paddling gait. The former one has been studied in our previous work, and a capacity of generating vectored thrust has been found. In this letter, we focus on the oscillating paddling gait by measuring the generated thrusts of the gait on an ePaddle-EGM prototype module. Experimental results verify that the oscillating paddling gait can generate vectored thrust by changing the location of the paddle shaft as well. Furthermore, we compare the oscillating paddling gait with the rotational paddling gait at the vectored thrusting property, magnitude of the thrust, and the gait efficiency.",
"title": ""
},
{
"docid": "4ea07335d42a859768565c8d88cd5280",
"text": "This paper brings together research from two different fields – user modelling and web ontologies – in attempt to demonstrate how recent semantic trends in web development can be combined with the modern technologies of user modelling. Over the last several years, a number of user-adaptive systems have been exploiting ontologies for the purposes of semantics representation, automatic knowledge acquisition, domain and user model visualisation and creation of interoperable and reusable architectural solutions. Before discussing these projects, we first overview the underlying user modelling and ontological technologies. As an example of the project employing ontology-based user modelling, we present an experiment design for translation of overlay student models for relative domains by means of ontology mapping.",
"title": ""
},
{
"docid": "3dc2710350110d846a744a73cf37560f",
"text": "The increase in awareness of people toward their nutritional habits has drawn considerable attention to the field of automatic food analysis. Focusing on self-service restaurants environment, automatic food analysis is not only useful for extracting nutritional information from foods selected by customers, it is also of high interest to speed up the service solving the bottleneck produced at the cashiers in times of high demand. In this paper, we address the problem of automatic food tray analysis in canteens and restaurants environment, which consists in predicting multiple foods placed on a tray image. We propose a new approach for food analysis based on convolutional neural networks, we name Semantic Food Detection, which integrates in the same framework food localization, recognition and segmentation. We demonstrate that our method improves the state-of-art food detection by a considerable margin on the public dataset UNIMIB2016, achieving about 90% in terms of F-measure, and thus provides a significant technological advance toward the automatic billing in restaurant environments.",
"title": ""
},
{
"docid": "67db336c7de0cff2df34e265a219e838",
"text": "Machine reading aims to automatically extract knowledge from text. It is a long-standing goal of AI and holds the promise of revolutionizing Web search and other fields. In this paper, we analyze the core challenges of machine reading and show that statistical relational AI is particularly well suited to address these challenges. We then propose a unifying approach to machine reading in which statistical relational AI plays a central role. Finally, we demonstrate the promise of this approach by presenting OntoUSP, an end-toend machine reading system that builds on recent advances in statistical relational AI and greatly outperforms state-of-theart systems in a task of extracting knowledge from biomedical abstracts and answering questions.",
"title": ""
},
{
"docid": "855b80a4dd22e841c8a929b20eb6e002",
"text": "Accuracy and stability of Kinect-like depth data is limited by its generating principle. In order to serve further applications with high quality depth, the preprocessing on depth data is essential. In this paper, we analyze the characteristics of the Kinect-like depth data by examing its generation principle and propose a spatial-temporal denoising algorithm taking into account its special properties. Both the intra-frame spatial correlation and the inter-frame temporal correlation are exploited to fill the depth hole and suppress the depth noise. Moreover, a divisive normalization approach is proposed to assist the noise filtering process. The 3D rendering results of the processed depth demonstrates that the lost depth is recovered in some hole regions and the noise is suppressed with depth features preserved.",
"title": ""
},
{
"docid": "564675e793834758bd66e440b65be206",
"text": "While it is still most common for information visualization researchers to develop new visualizations from a data-or taskdriven perspective, there is growing interest in understanding the types of visualizations people create by themselves for personal use. As part of this recent direction, we have studied a large collection of whiteboards in a research institution, where people make active use of combinations of words, diagrams and various types of visuals to help them further their thought processes. Our goal is to arrive at a better understanding of the nature of visuals that are created spontaneously during brainstorming, thinking, communicating, and general problem solving on whiteboards. We use the qualitative approaches of open coding, interviewing, and affinity diagramming to explore the use of recognizable and novel visuals, and the interplay between visualization and diagrammatic elements with words, numbers and labels. We discuss the potential implications of our findings on information visualization design.",
"title": ""
},
{
"docid": "086cd2bb956d0064e5286770a10ad4b2",
"text": "In this work, we propose a novel segmental hypergraph representation to model overlapping entity mentions that are prevalent in many practical datasets. We show that our model built on top of such a new representation is able to capture features and interactions that cannot be captured by previous models while maintaining a low time complexity for inference. We also present a theoretical analysis to formally assess how our representation is better than alternative representations reported in the literature in terms of representational power. Coupled with neural networks for feature learning, our model achieves the state-of-the-art performance in three benchmark datasets annotated with overlapping mentions.1",
"title": ""
},
{
"docid": "70221b4a688c01e9093e8f35d68ec982",
"text": "A dominant paradigm for learning-based approaches in computer vision is training generic models, such as ResNet for image recognition, or I3D for video understanding, on large datasets and allowing them to discover the optimal representation for the problem at hand. While this is an obviously attractive approach, it is not applicable in all scenarios. We claim that action detection is one such challenging problem the models that need to be trained are large, and the labeled data is expensive to obtain. To address this limitation, we propose to incorporate domain knowledge into the structure of the model to simplify optimization. In particular, we augment a standard I3D network with a tracking module to aggregate long term motion patterns, and use a graph convolutional network to reason about interactions between actors and objects. Evaluated on the challenging AVA dataset, the proposed approach improves over the I3D baseline by 5.5% mAP and over the state-ofthe-art by 4.8% mAP.",
"title": ""
},
{
"docid": "addad4069782620549e7a357e2c73436",
"text": "Drivable region detection is challenging since various types of road, occlusion or poor illumination condition have to be considered in a outdoor environment, particularly at night. In the past decade, Many efforts have been made to solve these problems, however, most of the already existing methods are designed for visible light cameras, which are inherently inefficient under low light conditions. In this paper, we present a drivable region detection algorithm designed for thermal-infrared cameras in order to overcome the aforementioned problems. The novelty of the proposed method lies in the utilization of on-line road initialization with a highly scene-adaptive sampling mask. Furthermore, our prior road information extraction is tailored to enforce temporal consistency among a series of images. In this paper, we also propose a large number of experiments in various scenarios (on-road, off-road and cluttered road). A total of about 6000 manually annotated images are made available in our website for the research community. Using this dataset, we compared our method against multiple state-of-the-art approaches including convolutional neural network (CNN) based methods to emphasize the robustness of our approach under challenging situations.",
"title": ""
},
{
"docid": "e90b3e8e42e213aab85f10ab325aec06",
"text": "In the strategic human resource management (SHRM) field three approaches have dominated, namely, the universal or best-practice, best-fit or contingency and resourcebased view (RBV). This study investigates evidence for the simultaneous or mixed adoption of these approaches by eight case study firms in the international hotel industry. Findings suggest there is considerable evidence of the combined use of the first two approaches but that the SHRM RBV approach was difficult to achieve by all companies. Overall, gaining differentiation through SHRM practices was found to be challenging due to specific industry forces. The study identifies that where companies derive some competitive advantage from their human resources and HRM practices they have closely aligned their managers’ expertise with their corporate market entry mode expertise and developed some distinctive, complex and integrated HRM interventions, which have a mutually reinforcing effect.",
"title": ""
},
{
"docid": "41e9dac7301e00793c6e4891e07b53fa",
"text": "We present an intriguing property of visual data that we observe in our attempt to isolate the influence of data for learning a visual representation. We observe that we can get better performance than existing model by just conditioning the existing representation on a million unlabeled images without any extra knowledge. As a by-product of this study, we achieve results better than prior state-of-theart for surface normal estimation on NYU-v2 depth dataset, and improved results for semantic segmentation using a selfsupervised representation on PASCAL-VOC 2012 dataset.",
"title": ""
},
{
"docid": "b189ae4140663c4e170b7fc579ce0e98",
"text": "Modern optical systems increasingly rely on DSP techniques for data transmission at 40Gbs and recently at 100Gbs and above. A significant challenge towards CMOS TX DSP SoC integration is due to requirements for four 6b DACs (Fig. 10.8.1) to operate at 56Gs/s with low power and small footprint. To date, the highest sampling rate of 43Gs/s 6b DAC is reported in SiGe BiCMOS process [1]. CMOS DAC implementations are constraint to 12Gs/s with the output signal frequency limited to 1.5GHz [2–4]. This paper demonstrates more than one order of magnitude improvement in 6b CMOS DAC design with a test circuit operating at 56Gs/s, achieving SFDR >30dBc and ENOB>4.3b up to the output frequency of 26.9GHz. Total power dissipation is less than 750mW and the core DAC die area is less than 0.6×0.4 mm2.",
"title": ""
},
{
"docid": "a698752bf7cf82e826848582816b1325",
"text": "The incidence and context of stotting were studied in Thomson's gazelles. Results suggested that gazelles were far more likely to stot in response to coursing predators, such as wild dogs, than they were to stalking predators, such as cheetahs. During hunts, gazelles that wild dogs selected stotted at lower rates than those they did not select. In addition, those which were chased, but which outran the predators, were more likely to stot, and stotted for longer durations, than those which were chased and killed. In response to wild dogs, gazelles in the dry season, which were probably in poor condition, were less likely to stot, and stotted at lower rates, than those in the wet season. We suggest that stotting could be an honest signal of a gazelle's ability to outrun predators, which coursers take into account when selecting prey.",
"title": ""
},
{
"docid": "9d2b3aaf57e31a2c0aa517d642f39506",
"text": "3.1. URINARY TRACT INFECTION Urinary tract infection is one of the important causes of morbidity and mortality in Indian population, affecting all age groups across the life span. Anatomically, urinary tract is divided into an upper portion composed of kidneys, renal pelvis, and ureters and a lower portion made up of urinary bladder and urethra. UTI is an inflammatory response of the urothelium to bacterial invasion that is usually associated with bacteriuria and pyuria. UTI may involve only the lower urinary tract or both the upper and lower tract [19].",
"title": ""
}
] |
scidocsrr
|
7949c3db61506450694b02b78a938c3e
|
Image Generation from Scene Graphs
|
[
{
"docid": "a4d253d6194a9a010660aedb564be39a",
"text": "This work on GGS-NN is motivated by the program verification application, where we need to analyze dynamic data structures created in the heap. On a very high level, in this application a machine learning model analyzes the heap states (a graph with memory nodes and pointers as edges) during the execution of a program and comes up with logical formulas that describes the heap. These logical formulas are then fed into a theorem prover to prove the correctness of the program. Problem-specific node annotations are used to initialize .",
"title": ""
}
] |
[
{
"docid": "ba4dd3419b24b8184d21a738cea4ddf2",
"text": "RESULTS: The first computer system helping with restorations – CEREC (initially Siemens, now Sirona) was implemented about 30 years ago. Many systems are already available to use both in the dental office and the technician’s laboratory. Now every type of ceramic material can be used in a restoration for almost all indications of aesthetic dentistry. The functional and aesthetic restorations for severely damaged primary and permanent children’s teeth require materials which must be biocompatible, mechanically durable during mastication and with unchanging colour. In the literature data there are evidences about Computer Aided Design/Computer Aided Manufacturing (CAD/CAM) technology usage in pediatric dentistry for dental restorations of extensive carious lesions, eroded and abraded teeth, primary teeth with absence of a permanent successor, dental dysplasia or dental trauma of hard tooth tissues.",
"title": ""
},
{
"docid": "06caed57da5784de254b5efcf1724003",
"text": "The validity of any traffic simulation model depends on its ability to generate representative driver acceleration profiles. This paper studies the effectiveness of recurrent neural networks in predicting the acceleration distributions for car following on highways. The long short-term memory recurrent networks are trained and used to propagate the simulated vehicle trajectories over 10-s horizons. On the basis of several performance metrics, the recurrent networks are shown to generally match or outperform baseline methods in replicating driver behavior, including smoothness and oscillatory characteristics present in real trajectories. This paper reveals that the strong performance is due to the ability of the recurrent network to identify recent trends in the ego-vehicle's state, and recurrent networks are shown to perform as, well as feedforward networks with longer histories as inputs.",
"title": ""
},
{
"docid": "594113ed497356eba99b63ddc5c749d7",
"text": "Aspect-based opinion mining is finding elaborate opinions towards a subject such as a product or an event. With explosive growth of opinionated texts on the Web, mining aspect-level opinions has become a promising means for online public opinion analysis. In particular, the boom of various types of online media provides diverse yet complementary information, bringing unprecedented opportunities for cross media aspect-opinion mining. Along this line, we propose CAMEL, a novel topic model for complementary aspect-based opinion mining across asymmetric collections. CAMEL gains information complementarity by modeling both common and specific aspects across collections, while keeping all the corresponding opinions for contrastive study. An auto-labeling scheme called AME is also proposed to help discriminate between aspect and opinion words without elaborative human labeling, which is further enhanced by adding word embedding-based similarity as a new feature. Moreover, CAMEL-DP, a nonparametric alternative to CAMEL is also proposed based on coupled Dirichlet Processes. Extensive experiments on real-world multi-collection reviews data demonstrate the superiority of our methods to competitive baselines. This is particularly true when the information shared by different collections becomes seriously fragmented. Finally, a case study on the public event “2014 Shanghai Stampede” demonstrates the practical value of CAMEL for real-world applications.",
"title": ""
},
{
"docid": "9d918a69a2be2b66da6ecf1e2d991258",
"text": "We designed and implemented TAGME, a system that is able to efficiently and judiciously augment a plain-text with pertinent hyperlinks to Wikipedia pages. The specialty of TAGME with respect to known systems [5,8] is that it may annotate texts which are short and poorly composed, such as snippets of search-engine results, tweets, news, etc.. This annotation is extremely informative, so any task that is currently addressed using the bag-of-words paradigm could benefit from using this annotation to draw upon (the millions of) Wikipedia pages and their inter-relations.",
"title": ""
},
{
"docid": "daacc1387932d7de207b5a3462ee4727",
"text": "Human decision makers in many domains can make use of predictions made by machine learning models in their decision making process, but the usability of these predictions is limited if the human is unable to justify his or her trust in the prediction. We propose a novel approach to producing justifications that is geared towards users without machine learning expertise, focusing on domain knowledge and on human reasoning, and utilizing natural language generation. Through a taskbased experiment, we show that our approach significantly helps humans to correctly decide whether or not predictions are accurate, and significantly increases their satisfaction with the justification.",
"title": ""
},
{
"docid": "ada8c64a2e5c7be58a2200e8d1f64063",
"text": "Nitrogen-containing bioactive alkaloids of plant origin play a significant role in human health and medicine. Several semisynthetic antimitotic alkaloids are successful in anticancer drug development. Gloriosa superba biosynthesizes substantial quantities of colchicine, a bioactive molecule for gout treatment. Colchicine also has antimitotic activity, preventing growth of cancer cells by interacting with microtubules, which could lead to the design of better cancer therapeutics. Further, several colchicine semisynthetics are less toxic than colchicine. Research is being conducted on effective, less toxic colchicine semisynthetic formulations with potential drug delivery strategies directly targeting multiple solid cancers. This article reviews the dynamic state of anticancer drug development from colchicine semisynthetics and natural colchicine production and briefly discusses colchicine biosynthesis.",
"title": ""
},
{
"docid": "71757cd2f861f31759ead3310fbb8383",
"text": "The promise of cloud computing is to provide computing resources instantly whenever they are needed. The state-of-art virtual machine (VM) provisioning technology can provision a VM in tens of minutes. This latency is unacceptable for jobs that need to scale out during computation. To truly enable on-the-fly scaling, new VM needs to be ready in seconds upon request. In this paper, We present an online temporal data mining system called ASAP, to model and predict the cloud VM demands. ASAP aims to extract high level characteristics from VM provisioning request stream and notify the provisioning system to prepare VMs in advance. For quantification issue, we propose Cloud Prediction Cost to encodes the cost and constraints of the cloud and guide the training of prediction algorithms. Moreover, we utilize a two-level ensemble method to capture the characteristics of the high transient demands time series. Experimental results using historical data from an IBM cloud in operation demonstrate that ASAP significantly improves the cloud service quality and provides possibility for on-the-fly provisioning.",
"title": ""
},
{
"docid": "50c660b9087d71513f484ef82075ac73",
"text": "We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.",
"title": ""
},
{
"docid": "8cf224c06eaa32ef8e084d3e90aedc18",
"text": "From a group of 1480 patients, 1036 were treated with metal frame removable partial dentures (RPDs) at least 5 years before this analysis. Of those, 748 patients who wore 886 RPDs were followed up between 5 and 10 years; 288 patients dropped out. The 748 patients in the study groups were wearing 703 conventionally designed metal frame RPDs and 183 RPDs with attachments. When dropout patients and patients who remained in the study were compared, no differences were shown in the variables analyzed, which indicated that the dropouts did not bias the results. Survival rates of the RPDs were calculated by different failure criteria. Taking abutment retreatment as failure criterion, 40% of the conventional RPDs survived 5 years and more than 20% survived 10 years. In RPDs with attachments crowning abutments seemed to retard abutment retreatment. Fracture of the metal frame was found in 10% to 20% of the RPDs after 5 years and in 27% to 44% after 10 years. Extension base RPDs needed more adjustments of the denture base than did tooth-supported base RPDs. Taking replacement or not wearing the RPD as failure criteria, the survival rate was 75% after 5 years and 50% after 10 years (half-life time). The treatment approach in this study was characterized by a simple design of the RPD and regular surveillance of the patient in a recall system.",
"title": ""
},
{
"docid": "eee0bc6ee06dce38efbc89659771f720",
"text": "In a data center, an IO from an application to distributed storage traverses not only the network, but also several software stages with diverse functionality. This set of ordered stages is known as the storage or IO stack. Stages include caches, hypervisors, IO schedulers, file systems, and device drivers. Indeed, in a typical data center, the number of these stages is often larger than the number of network hops to the destination. Yet, while packet routing is fundamental to networks, no notion of IO routing exists on the storage stack. The path of an IO to an endpoint is predetermined and hard-coded. This forces IO with different needs (e.g., requiring different caching or replica selection) to flow through a one-size-fits-all IO stack structure, resulting in an ossified IO stack. This paper proposes sRoute, an architecture that provides a routing abstraction for the storage stack. sRoute comprises a centralized control plane and “sSwitches” on the data plane. The control plane sets the forwarding rules in each sSwitch to route IO requests at runtime based on application-specific policies. A key strength of our architecture is that it works with unmodified applications and VMs. This paper shows significant benefits of customized IO routing to data center tenants (e.g., a factor of ten for tail IO latency, more than 60% better throughput for a customized replication protocol and a factor of two in throughput for customized caching).",
"title": ""
},
{
"docid": "31345b00f5da1dae98ec920d0336febf",
"text": "Navpreet Singh M. tech Scholar, CSE & IT Deptt., BBSB Engineering College, Fatehgarh Sahib, Punjab, India (IKG – Punjab Technical University, Jalandhar) [email protected] Dr. Kanwalvir Singh Dhindsa Professor, CSE & IT Deptt., BBSB Engineering College, Fatehgarh Sahib, Punjab, India (IKG – Punjab Technical University, Jalandhar) [email protected] -------------------------------------------------------------------ABSTRACT--------------------------------------------------------------Load balancing is a computer networking method to distribute workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by dedicated software or hardware, such as a multilayer switch or a Domain Name System server. In this paper, the existing static algorithms used for simple cloud load balancing have been identified and also a hybrid algorithm for developments in the future is suggested.",
"title": ""
},
{
"docid": "c5f7f42ac022f3e340ebf3cdb4427723",
"text": "In this paper, it is proposed to use spatial differentials instead of independently actuated cables to drive cable robots. Spatial cable differentials are constituted of several cables attaching the moving platform to the base but all of these cables are pulled by the same actuator through a differential system. To this aim, cable differentials with both planar and spatial architectures are first described in this work and then, their resultant properties on the force distribution is presented. Next, a special cable differential is selected and used to design the architecture of two incompletely and fully restrained robots. Finally, by comparing the workspaces of these robots with their classically actuated counterparts, the advantage of using differentials on their wrench-closure and wrench-feasible workspaces is illustrated.",
"title": ""
},
{
"docid": "a9a3c033b6467464b1f926ed9119a1cc",
"text": "Media mix modeling is a statistical analysis on historical data to measure the return on investment (ROI) on advertising and other marketing activities. Current practice usually utilizes data aggregated at a national level, which often suffers from small sample size and insufficient variation in the media spend. When sub-national data is available, we propose a geo-level Bayesian hierarchical media mix model (GBHMMM), and demonstrate that the method generally provides estimates with tighter credible intervals compared to a model with national level data alone. This reduction in error is due to having more observations and useful variability in media spend, which can protect advertisers from unsound reallocation decisions. Under some weak conditions, the geo-level model can reduce ad targeting bias. When geo-level data is not available for all the media channels, the geo-level model estimates generally deteriorate as more media variables are imputed using the national level data.",
"title": ""
},
{
"docid": "7c05970c34f98bf1d923ae0de76172ce",
"text": "In the continual battle between malware attacks and antivirus technologies, both sides strive to deploy their techniques at always lower layers in the software system stack. The goal is to monitor and control the software executing in the levels above their own deployment, to detect attacks or to defeat defenses. Recent antivirus solutions have gone even below the software, by enlisting hardware support. However, so far, they have only mimicked classic software techniques by monitoring software clues of an attack. As a result, malware can easily defeat them by employing metamorphic manifestation patterns. With this work, we propose a hardware-monitoring solution, SNIFFER, which tracks malware manifestations in system-level behavior, rather than code patterns, and it thus cannot be circumvented unless malware renounces its very nature, that is, to attack. SNIFFER leverages in-hardware feature monitoring, and uses machine learning to assess whether a system shows signs of an attack. Experiments with a virtual SNIFFER implementation, which supports 13 features and tests against five common network-based malicious behaviors, show that SNIFFER detects malware nearly 100% of the time, unless the malware aggressively throttle its attack. Our experiments also highlight the need for machine-learning classifiers employing a range of diverse system features, as many of the tested malware require multiple, seemingly disconnected, features for accurate detection.",
"title": ""
},
{
"docid": "84a01029714dfef5d14bc4e2be78921e",
"text": "Integrating frequent pattern mining with interactive visualization for temporal event sequence analysis poses many interesting research questions and challenges. We review and reflect on some of these challenges based on our experiences working on event sequence data from two domains: web analytics and application logs. These challenges can be organized using a three-stage framework: pattern mining, pattern pruning and interactive visualization.",
"title": ""
},
{
"docid": "48b9bdb75ad7dba87ffcb9516a7ba032",
"text": "The historic background of algorithmic processing with regard to etymology and methodology is translated into terms of mathematical logic and Computer Science. A formal logic structure is introduced by exemplary questions posed to Fiqh-chapters to define a logic query language. As a foundation, a generic algorithm for deciding Fiqhrulings is designed to enable and further leverage rule of law (vs. rule by law) with full transparency and complete algorithmic coverage of Islamic law eventually providing legal security, legal equality, and full legal accountability. This is implemented by disentangling and reinstating classic Fiqh-methodology (usul al-Fiqh) with the expressive power of subsets of First Order Logic (FOL) sustainably substituting ad hoc reasoning with falsifiable rational argumentation. The results are discussed in formal terms of completeness, decidability and complexity of formal Fiqh-systems. An Entscheidungsproblem for formal Fiqh-Systems is formulated and validated.",
"title": ""
},
{
"docid": "f72160ed6188424481fecbf4cb7ee31a",
"text": "AIMS AND OBJECTIVES\nThe aim of this study was to identify factors that influence nurse's decisions to question concerning aspects of medication administration within the context of a neonatal clinical care unit.\n\n\nBACKGROUND\nMedication error in the neonatal setting can be high with this particularly vulnerable population. As the care giver responsible for medication administration, nurses are deemed accountable for most errors. However, they are recognised as the forefront of prevention. Minimal evidence is available around reasoning, decision making and questioning around medication administration. Therefore, this study focuses upon addressing the gap in knowledge around what nurses believe influences their decision to question.\n\n\nDESIGN\nA critical incident design was employed where nurses were asked to describe clinical incidents around their decision to question a medication issue. Nurses were recruited from a neonatal clinical care unit and participated in an individual digitally recorded interview.\n\n\nRESULTS\nOne hundred and three nurses participated between December 2013-August 2014. Use of the constant comparative method revealed commonalities within transcripts. Thirty-six categories were grouped into three major themes: 'Working environment', 'Doing the right thing' and 'Knowledge about medications'.\n\n\nCONCLUSIONS\nFindings highlight factors that influence nurses' decision to question issues around medication administration. Nurses feel it is their responsibility to do the right thing and speak up for their vulnerable patients to enhance patient safety. Negative dimensions within the themes will inform planning of educational strategies to improve patient safety, whereas positive dimensions must be reinforced within the multidisciplinary team.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThe working environment must support nurses to question and ultimately provide safe patient care. Clear and up to date policies, formal and informal education, role modelling by senior nurses, effective use of communication skills and a team approach can facilitate nurses to appropriately question aspects around medication administration.",
"title": ""
},
{
"docid": "3b2db7bd323243676cc24b2af506564b",
"text": "Scenarios are possible future states of the world that represent alternative plausible conditions under different assumptions. Often, scenarios are developed in a context relevant to stakeholders involved in their applications since the evaluation of scenario outcomes and implications can enhance decision-making activities. This paper reviews the state-of-the-art of scenario development and proposes a formal approach to scenario development in environmental decision-making. The discussion of current issues in scenario studies includes advantages and obstacles in utilizing a formal scenario development framework, and the different forms of uncertainty inherent in scenario development, as well as how they should be treated. An appendix for common scenario terminology has been attached for clarity. Major recommendations for future research in this area include proper consideration of uncertainty in scenario studies in particular in relation to stakeholder relevant information, construction of scenarios that are more diverse in nature, and sharing of information and resources among the scenario development research community. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "572348e4389acd63ea7c0667e87bbe04",
"text": "Through the analysis of collective upvotes and downvotes in multiple social media, we discover the bimodal regime of collective evaluations. When online content surpasses the local social context by reaching a threshold of collective attention, negativity grows faster with positivity, which serves as a trace of the burst of a filter bubble. To attain a global audience, we show that emotions expressed in online content has a significant effect and also play a key role in creating polarized opinions.",
"title": ""
},
{
"docid": "62ee27985dc4c75e0b0ef9d7e93968e8",
"text": "Based on customer cognitive, affective and conative experiences in Internet online shopping, this study, from customers’ perspectives, develops a conceptual framework for e-CRM to explain the psychological process that customers maintain a long-term exchange relationship with specific online retailer. The conceptual framework proposes a series of causal linkages among the key variables affecting customer commitment to specific online retailer, such as perceived value (as cognitive belief), satisfaction (as affective experience) and trust (as conative relationship intention). Three key exogenous variables affecting Internet online shopping experiences, such as perceived service quality, perceived product quality, and perceived price fairness, are integrated into the framework. This study empirically tested and supported a large part of the proposed framework and the causal linkages within it. The empirical results highlight some managerial implications for successfully developing and implementing a strategy for e-CRM.",
"title": ""
}
] |
scidocsrr
|
1a84c4fe0c77d42aaf4590cdfc2eacba
|
Reinforcement learning for robot soccer
|
[
{
"docid": "274a88ca3f662b6250d856148389b078",
"text": "This paper introduces NFQ, an algorithm for efficient and effective training of a Q-value function represented by a multi-layer perceptron. Based on the principle of storing and reusing transition experiences, a model-free, neural network based Reinforcement Learning algorithm is proposed. The method is evaluated on three benchmark problems. It is shown empirically, that reasonably few interactions with the plant are needed to generate control policies of high quality.",
"title": ""
}
] |
[
{
"docid": "4da3f01ac76da39be45ab39c1e46bcf0",
"text": "Depth cameras are low-cost, plug & play solution to generate point cloud. 3D depth camera yields depth images which do not convey the actual distance. A 3D camera driver does not support raw depth data output, these are usually filtered and calibrated as per the sensor specifications and hence a method is required to map every pixel back to its original point in 3D space. This paper demonstrates the method to triangulate a pixel from the 2D depth image back to its actual position in 3D space. Further this method illustrates the independence of this mapping operation, which facilitates parallel computing. Triangulation method and ratios between the pixel positions and camera parameters are used to estimate the true position in 3D space. The algorithm performance can be increased by 70% by the usage of TPL libraries. This performance differs from processor to processor",
"title": ""
},
{
"docid": "336c787fe3a3b81b8ee4193802499376",
"text": "In this document, a real-time fog detection system using an on-board low cost b&w camera, for a driving application, is presented. This system is based on two clues: estimation of the visibility distance, which is calculated from the camera projection equations and the blurring due to the fog. Because of the water particles floating in the air, sky light gets diffuse and, focus on the road zone, which is one of the darkest zones on the image. The apparent effect is that some part of the sky introduces in the road. Also in foggy scenes, the border strength is reduced in the upper part of the image. These two sources of information are used to make this system more robust. The final purpose of this system is to develop an automatic vision-based diagnostic system for warning ADAS of possible wrong working conditions. Some experimental results and the conclusions about this work are presented.",
"title": ""
},
{
"docid": "cbda3aafb8d8f76a8be24191e2fa7c54",
"text": "With the rapid development of robot and other intelligent and autonomous agents, how a human could be influenced by a robot’s expressed mood when making decisions becomes a crucial question in human-robot interaction. In this pilot study, we investigate (1) in what way a robot can express a certain mood to influence a human’s decision making behavioral model; (2) how and to what extent the human will be influenced in a game theoretic setting. More specifically, we create an NLP model to generate sentences that adhere to a specific affective expression profile. We use these sentences for a humanoid robot as it plays a Stackelberg security game against a human. We investigate the behavioral model of the human player.",
"title": ""
},
{
"docid": "6b6542a97c846875e2b6c9ba76557715",
"text": "The motivation behind working on a translation system from Telugu to English were based on the principles that a) There are many translation systems for translating from English to Indian languages but very few for vice versa. Telugu is a language that exhibits very strong phrasal, word and sentence structures next to Sanskrit, which makes the work organized on one hand but complex in handling on the other. This work demonstrates one such machine translation (MT) system for translating simple and moderately complex sentences from Telugu to English. b) Of the many MT approaches, the direct MT is used for translation between similar or nearly related languages. However, the direct MT has been used in this work for conversion from Telugu to English, which is quite complex compared to other Indian languages. The purpose of using direct MT for development of such a tool was to have the flexibility in usage, keeping it simple, look for rapid development and primarily to have better accuracy than all the known system. c) There are very large numbers of elisions/ inflection rules in Telugu requiring complex morphs, like those in Sanskrit. A large number of rules for handling inflections were to be developed along with the grammar rules. The outcomes were compared with Google Translator, a publicly available translation web based system. The outcomes were found to be much better, as much as 90 percent more accurate. This work shall bring forth deeper insights into Telugu MT research. KeywordsMachine translation (MT), direct MT, Telugu to English, natural language processing (NLP), elisions, inflections.",
"title": ""
},
{
"docid": "305ae3e7a263bb12f7456edca94c06ca",
"text": "We study the effects of changes in uncertainty about future fiscal policy on aggregate economic activity. In light of large fiscal deficits and high public debt levels in the U.S., a fiscal consolidation seems inevitable. However, there is notable uncertainty about the policy mix and timing of such a budgetary adjustment. To evaluate the consequences of the increased uncertainty, we first estimate tax and spending processes for the U.S. that allow for timevarying volatility. We then feed these processes into an otherwise standard New Keynesian business cycle model calibrated to the U.S. economy. We find that fiscal volatility shocks can have a sizable adverse effect on economic activity.",
"title": ""
},
{
"docid": "331391539cd5a226e9389f96f815fa0d",
"text": "Understanding protein function from amino acid sequence is a fundamental problem in biology. In this project, we explore how well we can represent biological function through examination of raw sequence alone. Using a large corpus of protein sequences and their annotated protein families, we learn dense vector representations for amino acid sequences using the co-occurrence statistics of short fragments. Then, using this representation, we experiment with several neural network architectures to train classifiers for protein family identification. We show good performance for a multi-class prediction problem with 589 protein family classes.",
"title": ""
},
{
"docid": "1a063741d53147eb6060a123bff96c27",
"text": "OBJECTIVE\nThe assessment of cognitive functions of adults with attention deficit hyperactivity disorder (ADHD) comprises self-ratings of cognitive functioning (subjective assessment) as well as psychometric testing (objective neuropsychological assessment). The aim of the present study was to explore the utility of these assessment strategies in predicting neuropsychological impairments of adults with ADHD as determined by both approaches.\n\n\nMETHOD\nFifty-five adults with ADHD and 66 healthy participants were assessed with regard to cognitive functioning in several domains by employing subjective and objective measurement tools. Significance and effect sizes for differences between groups as well as the proportion of patients with impairments were analyzed. Furthermore, logistic regression analyses were carried out in order to explore the validity of subjective and objective cognitive measures in predicting cognitive impairments.\n\n\nRESULTS\nBoth subjective and objective assessment tools revealed significant cognitive dysfunctions in adults with ADHD. The majority of patients displayed considerable impairments in all cognitive domains assessed. A comparison of effect sizes, however, showed larger dysfunctions in the subjective assessment than in the objective assessment. Furthermore, logistic regression models indicated that subjective cognitive complaints could not be predicted by objective measures of cognition and vice versa.\n\n\nCONCLUSIONS\nSubjective and objective assessment tools were found to be sensitive in revealing cognitive dysfunctions of adults with ADHD. Because of the weak association between subjective and objective measurements, it was concluded that subjective and objective measurements are both important for clinical practice but may provide distinct types of information and capture different aspects of functioning.",
"title": ""
},
{
"docid": "d3fd8c1ce41892f54aedff187f4872c2",
"text": "In the first year of the TREC Micro Blog track, our participation has focused on building from scratch an IR system based on the Whoosh IR library. Though the design of our system (CipCipPy) is pretty standard it includes three ad-hoc solutions for the track: (i) a dedicated indexing function for hashtags that automatically recognizes the distinct words composing an hashtag, (ii) expansion of tweets based on the title of any referred Web page, and (iii) a tweet ranking function that ranks tweets in results by their content quality, which is compared against a reference corpus of Reuters news. In this preliminary paper we describe all the components of our system, and the efficacy scored by our runs. The CipCipPy system is available under a GPL license.",
"title": ""
},
{
"docid": "0e01161f02dcf14e555c4918ff762a0e",
"text": "Semi-implicit variational inference (SIVI) is introduced to expand the commonly used analytic variational distribution family, by mixing the variational parameter with a flexible distribution. This mixing distribution can assume any density function, explicit or not, as long as independent random samples can be generated via reparameterization. Not only does SIVI expand the variational family to incorporate highly flexible variational distributions, including implicit ones that have no analytic density functions, but also sandwiches the evidence lower bound (ELBO) between a lower bound and an upper bound, and further derives an asymptotically exact surrogate ELBO that is amenable to optimization via stochastic gradient ascent. With a substantially expanded variational family and a novel optimization algorithm, SIVI is shown to closely match the accuracy of MCMC in inferring the posterior in a variety of Bayesian inference tasks.",
"title": ""
},
{
"docid": "3fef35dd088cb4f84eabc64b2a570e4c",
"text": "Two separate transmitarrays that operate at 77 GHz are designed and fabricated. The first transmitarray acts as a quarter-wave plate that transforms a linearly polarized incident wave into a circularly polarized transmitted wave. The second transmitarray acts as both a quarter-wave plate and a beam refracting surface to provide polarization and wavefront control. When the second transmittarray is illuminated with a normally incident, linearly polarized beam, the transmitted field is efficiently refracted to 45 °, and the polarization is converted to circular. The half-power bandwidth was measured to be 17%, and the axial ratio of the transmitted field remained below 2.5 dB over the entire bandwidth. Both designs have a subwavelength thickness of 0.4 mm (λ°/9.7). The developed structures are fabricated with low-cost printed-circuit-board processes on flexible substrates. The transmitarrays are realized by cascading three patterned metallic surfaces (sheet admittances) to achieve complete phase control, while maintaining high transmission. Polarization conversion is accomplished with anisotropic sheets that independently control the field polarized along the two orthogonal axes. The structures are analyzed with both circuit- and fields-based approaches.",
"title": ""
},
{
"docid": "e8318c6ef6d710b9da6ed4dff50066ec",
"text": "Convolution is one of the most important operators used in image processing. With the constant need to increase the performance in high-end applications and the rise and popularity of parallel architectures, such as GPUs and the ones implemented in FPGAs, comes the necessity to compare these architectures in order to determine which of them performs better and in what scenario. In this article, convolution was implemented in each of the aforementioned architectures with the following languages: CUDA for GPUs and Verilog for FPGAs. In addition, the same algorithms were also implemented in MATLAB, using predefined operations and in C using a regular x86 quad-core processor. Comparative performance measures, considering the execution time and the clock ratio, were taken and commented in the paper. Overall, it was possible to achieve a CUDA speedup of roughly 200× in comparison to C, 70× in comparison to Matlab and 20× in comparison to FPGA.",
"title": ""
},
{
"docid": "40939d3a4634498fb50c0cda9e31f476",
"text": "Learning analytics is receiving increased attention, in part because it offers to assist educational institutions in increasing student retention, improving student success, and easing the burden of accountability. Although these large-scale issues are worthy of consideration, faculty might also be interested in how they can use learning analytics in their own courses to help their students succeed. In this paper, we define learning analytics, how it has been used in educational institutions, what learning analytics tools are available, and how faculty can make use of data in their courses to monitor and predict student performance. Finally, we discuss several issues and concerns with the use of learning analytics in higher education. Have you ever had the sense at the start of a new course or even weeks into the semester that you could predict which students will drop the course or which students will succeed? Of course, the danger of this realization is that it may create a self-fulfilling prophecy or possibly be considered “profiling”. But it could also be that you have valuable data in your head, collected from semesters of experience, that can help you predict who will succeed and who will not based on certain variables. In short, you likely have hunches based on an accumulation of experience. The question is, what are those variables? What are those data? And how well will they help you predict student performance and retention? More importantly, how will those data help you to help your students succeed in your course? Such is the promise of learning analytics. Learning analytics is defined as “the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Long & Siemens, 2011, p. 32). Learning analytics offers promise for predicting and improving student success and retention (e.g., Olmos & Corrin, 2012; Smith, Lange, & Huston, 2012) in part because it allows faculty, institutions, and students to make data-driven decisions about student success and retention. Data-driven decision making involves making use of data, such as the sort provided in Learning Management Systems (LMS), to inform educator’s judgments (Jones, 2012; Long & Siemens, 2011; Picciano, 2012). For example, to argue for increased funding to support student preparation for a course or a set of courses, it would be helpful to have data showing that students who have certain skills or abilities or prior coursework perform better in the class or set of classes than those who do not. Journal of Interactive Online Learning Dietz-Uhler & Hurn",
"title": ""
},
{
"docid": "3ddcf5f0e4697a0d43eff2cca77a1ab7",
"text": "Lymph nodes are assessed routinely in clinical practice and their size is followed throughout radiation or chemotherapy to monitor the effectiveness of cancer treatment. This paper presents a robust learning-based method for automatic detection and segmentation of solid lymph nodes from CT data, with the following contributions. First, it presents a learning based approach to solid lymph node detection that relies on marginal space learning to achieve great speedup with virtually no loss in accuracy. Second, it presents a computationally efficient segmentation method for solid lymph nodes (LN). Third, it introduces two new sets of features that are effective for LN detection, one that self-aligns to high gradients and another set obtained from the segmentation result. The method is evaluated for axillary LN detection on 131 volumes containing 371 LN, yielding a 83.0% detection rate with 1.0 false positive per volume. It is further evaluated for pelvic and abdominal LN detection on 54 volumes containing 569 LN, yielding a 80.0% detection rate with 3.2 false positives per volume. The running time is 5-20 s per volume for axillary areas and 15-40 s for pelvic. An added benefit of the method is the capability to detect and segment conglomerated lymph nodes.",
"title": ""
},
{
"docid": "17a336217e717dfedd7fa9f96a28da80",
"text": "Context: Competitions for self-driving cars facilitated the development and research in the domain of autonomous vehicles towards potential solutions for the future mobility. Objective: Miniature vehicles can bridge the gap between simulation-based evaluations of algorithms relying on simplified models, and those time-consuming vehicle tests on real-scale proving grounds. Method: This article combines findings from a systematic literature review, an in-depth analysis of results and technical concepts from contestants in a competition for self-driving miniature cars, and experiences of participating in the 2013 competition for self-driving cars. Results: A simulation-based development platform for real-scale vehicles has been adapted to support the development of a self-driving miniature car. Furthermore, a standardized platform was designed and realized to enable research and experiments in the context of future mobility solutions. Conclusion: A clear separation between algorithm conceptualization and validation in a model-based simulation environment enabled efficient and riskless experiments and validation. The design of a reusable, low-cost, and energy-efficient hardware architecture utilizing a standardized software/hardware interface enables experiments, which would otherwise require resources like a large real-scale",
"title": ""
},
{
"docid": "ea7acc555f2cb2de898a3706c31006db",
"text": "Securing the supply chain of integrated circuits is of utmost importance to computer security. In addition to counterfeit microelectronics, the theft or malicious modification of designs in the foundry can result in catastrophic damage to critical systems and large projects. In this letter, we describe a 3-D architecture that splits a design into two separate tiers: one tier that contains critical security functions is manufactured in a trusted foundry; another tier is manufactured in an unsecured foundry. We argue that a split manufacturing approach to hardware trust based on 3-D integration is viable and provides several advantages over other approaches.",
"title": ""
},
{
"docid": "e9d5ba66ddcc3a38020f532414ebeef7",
"text": "Current theories of aspect acknowledge the pervasiveness of verbs of variable telicity, and are designed to account both for why these verbs show such variability and for the complex conditions that give rise to telic and atelic interpretations. Previous work has identified several sets of such verbs, including incremental theme verbs, such as eat and destroy; degree achievements, such as cool and widen; and (a)telic directed motion verbs, such as ascend and descend (see e.g., Dowty 1979; Declerck 1979; Dowty 1991; Krifka 1989, 1992; Tenny 1994; Bertinetto and Squartini 1995; Levin and Rappaport Hovav 1995; Jackendoff 1996; Ramchand 1997; Filip 1999; Hay, Kennedy, and Levin 1999; Rothstein 2003; Borer 2005). As the diversity in descriptive labels suggests, most previous work has taken these classes to embody distinct phenomena and to have distinct lexical semantic analyses. We believe that it is possible to provide a unified analysis in which the behavior of all of these verbs stems from a single shared element of their meanings: a function that measures the degree to which an object changes relative to some scalar dimension over the course of an event. We claim that such ‘measures of change’ are based on the more general kinds of measure functions that are lexicalized in many languages by gradable adjectives, and that map an object to a scalar value that represents the degree to which it manifests some gradable property at a time (see Bartsch and Vennemann 1972,",
"title": ""
},
{
"docid": "d878e4bb4b17901a36c2cf7235c4568f",
"text": "Cloud computing is the future generation of computational services delivered over the Internet. As cloud infrastructure expands, resource management in such a large heterogeneous and distributed environment is a challenging task. In a cloud environment, uncertainty and dispersion of resources encounters problems of allocation of resources. Unfortunately, existing resource management techniques, frameworks and mechanisms are insufficient to handle these environments, applications and resource behaviors. To provide an efficient performance and to execute workloads, there is a need of quality of service (QoS) based autonomic resource management approach which manages resources automatically and provides reliable, secure and cost efficient cloud services. In this paper, we present an intelligent QoS-aware autonomic resource management approach named as CHOPPER (Configuring, Healing, Optimizing and Protecting Policy for Efficient Resource management). CHOPPER offers self-configuration of applications and resources, self-healing by handling sudden failures, self-protection against security attacks and self-optimization for maximum resource utilization. We have evaluated the performance of the proposed approach in a real cloud environment and the experimental results show that the proposed approach performs better in terms of cost, execution time, SLA violation, resource contention and also provides security against attacks.",
"title": ""
},
{
"docid": "a08e91040414d6bbec156a5ee90d854d",
"text": "MapReduce has emerged as an important paradigm for processing data in large data centers. MapReduce is a three phase algorithm comprising of Map, Shuffle and Reduce phases. Due to its widespread deployment, there have been several recent papers outlining practical schemes to improve the performance of MapReduce systems. All these efforts focus on one of the three phases to obtain performance improvement. In this paper, we consider the problem of jointly scheduling all three phases of the MapReduce process with a view of understanding the theoretical complexity of the joint scheduling and working towards practical heuristics for scheduling the tasks. We give guaranteed approximation algorithms and outline several heuristics to solve the joint scheduling problem.",
"title": ""
},
{
"docid": "6f6667e4c485978b566d25837083b565",
"text": "Topic models provide a powerful tool for analyzing large text collections by representing high dimensional data in a low dimensional subspace. Fitting a topic model given a set of training documents requires approximate inference techniques that are computationally expensive. With today's large-scale, constantly expanding document collections, it is useful to be able to infer topic distributions for new documents without retraining the model. In this paper, we empirically evaluate the performance of several methods for topic inference in previously unseen documents, including methods based on Gibbs sampling, variational inference, and a new method inspired by text classification. The classification-based inference method produces results similar to iterative inference methods, but requires only a single matrix multiplication. In addition to these inference methods, we present SparseLDA, an algorithm and data structure for evaluating Gibbs sampling distributions. Empirical results indicate that SparseLDA can be approximately 20 times faster than traditional LDA and provide twice the speedup of previously published fast sampling methods, while also using substantially less memory.",
"title": ""
}
] |
scidocsrr
|
a94d75d9f9ab0d00da601fd4cb4a52d8
|
Love & Loans The Effect of Beauty and Personal Characteristics in Credit Markets∗
|
[
{
"docid": "7440cb90073c8d8d58e28447a1774b2c",
"text": "Common maxims about beauty suggest that attractiveness is not important in life. In contrast, both fitness-related evolutionary theory and socialization theory suggest that attractiveness influences development and interaction. In 11 meta-analyses, the authors evaluate these contradictory claims, demonstrating that (a) raters agree about who is and is not attractive, both within and across cultures; (b) attractive children and adults are judged more positively than unattractive children and adults, even by those who know them; (c) attractive children and adults are treated more positively than unattractive children and adults, even by those who know them; and (d) attractive children and adults exhibit more positive behaviors and traits than unattractive children and adults. Results are used to evaluate social and fitness-related evolutionary theories and the veracity of maxims about beauty.",
"title": ""
}
] |
[
{
"docid": "f400ca4fe8fc5c684edf1ae60e026632",
"text": "Driverless vehicles will be common on the road in a short time. They will have many impacts on the global transport market trends. One of the remarkable driverless vehicles impacts will be the laying aside of rail systems, because of several reasons, that is to say traffic congestions will be no more a justification for rail, rail will not be the best answer for disableds, air pollution of cars are more or less equal to air pollution of trains and the last but not least reason is that driverless cars are safer than trains.",
"title": ""
},
{
"docid": "9330c2308883a44b58bb18a7e9de7748",
"text": "In this paper, model predictive control (MPC) strategy is implemented to a GE9001E gas turbine power plant. A linear model is developed for the gas turbine using conventional mathematical models and ARX identification procedure. Also a process control model is identified for system outputs prediction. The controller is designed in order to adjust the exhaust gas temperature and the rotor speed by compressor inlet guide vane (IGV) position and fuel signals. The proposed system is simulated under load demand disturbances. It is shown that MPC controller can maintain the rotor speed and exhaust gas temperature more accurately in comprehension with both SpeedTronicTM control system and conventional PID control. Key-words: Gas turbine, Identification, ARX, Predictive control, Power plant, Modeling, Multivariable control, PID",
"title": ""
},
{
"docid": "c0296c76b81846a9125b399e6efd2238",
"text": "Three Guanella-type transmission line transformers (TLT) are presented: a coiled TLT on a GaAs substrate, a straight ferriteless TLT on a multilayer PCB and a straight hybrid TLT that employs semi-rigid coaxial cables and a ferrite. All three devices have 4:1 impedance transformation ratio, matching 12.5 /spl Omega/ to 50 /spl Omega/. Extremely broadband operation is achieved. A detailed description of the devices and their operational principle is given. General aspects of the design of TLT are discussed.",
"title": ""
},
{
"docid": "02bd3ca492a58e3007c115401419a8ca",
"text": "This paper presents a hybrid predictive model for forecasting intraday stock prices. The proposed model hybridizes the variational mode decomposition (VMD) which is a new multiresolution technique with backpropagation neural network (BPNN). The VMD is used to decompose price series into a sum of variational modes (VM). The extracted VM are used to train BPNN. Besides, particle swarm optimization (PSO) is employed for BPNN initial weights optimization. Experimental results from a set of six stocks show the superiority of the hybrid VMD–PSO–BPNN predictive model over the baseline predictive model eywords: ariational mode decomposition rtificial neural networks article swarm optimization ntraday stock price which is a PSO–BPNN model trained with past prices. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "825888e4befcbf6b492143a13928a34e",
"text": "Sentiment analysis is one of the prominent fields of data mining that deals with the identification and analysis of sentimental contents generally available at social media. Twitter is one of such social medias used by many users about some topics in the form of tweets. These tweets can be analyzed to find the viewpoints and sentiments of the users by using clustering-based methods. However, due to the subjective nature of the Twitter datasets, metaheuristic-based clustering methods outperforms the traditional methods for sentiment analysis. Therefore, this paper proposes a novel metaheuristic method (CSK) which is based on K-means and cuckoo search. The proposed method has been used to find the optimum cluster-heads from the sentimental contents of Twitter dataset. The efficacy of proposed method has been tested on different Twitter datasets and compared with particle swarm optimization, differential evolution, cuckoo search, improved cuckoo search, gauss-based cuckoo search, and two n-grams methods. Experimental results and statistical analysis validate that the proposed method outperforms the existing methods. The proposed method has theoretical implications for the future research to analyze the data generated through social networks/medias. This method has also very generalized practical implications for designing a system that can provide conclusive reviews on any social issues. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "968797472eeedd75ff9b89909bc4f84d",
"text": "In this paper, we investigate the issue of minimizing data center energy usage. In particular, we formulate a problem of virtual machine placement with the objective of minimizing the total power consumption of all the servers. To do this, we examine a CPU power consumption model and then incorporate the model into an mixed integer programming formulation. In order to find optimal or near-optimal solutions fast, we resolve two difficulties: non-linearity of the power model and integer decision variables. We first show how to linearize the problem, and then give a relaxation and iterative rounding algorithm. Computation experiments have shown that the algorithm can solve the problem much faster than the standard integer programming algorithms, and it consistently yields near-optimal solutions. We also provide a heuristic min-cost algorithm, which finds less optimal solutions but works even faster.",
"title": ""
},
{
"docid": "dfa890a87b2e5ac80f61c793c8bca791",
"text": "Reinforcement learning (RL) algorithms have traditionally been thought of as trial and error learning methods that use actual control experience to incrementally improve a control policy. Sutton's DYNA architecture demonstrated that RL algorithms can work as well using simulated experience from an environment model, and that the resulting computation was similar to doing one-step lookahead planning. Inspired by the literature on hierarchical planning, I propose learning a hierarchy of models of the environment that abstract temporal detail as a means of improving the scalability of RL algorithms. I present H-DYNA (Hierarchical DYNA), an extension to Sutton's DYNA architecture that is able to learn such a hierarchy of abstract models. H-DYNA di ers from hierarchical planners in two ways: rst, the abstract models are learned using experience gained while learning to solve other tasks in the same environment, and second, the abstract models can be used to solve stochastic control tasks. Simulations on a set of compositionally-structured navigation tasks show that H-DYNA can learn to solve them faster than conventional RL algorithms. The abstract models also serve as mechanisms for achieving transfer of learning across multiple tasks.",
"title": ""
},
{
"docid": "6650966d57965a626fd6f50afe6cd7a4",
"text": "This paper presents a generalized version of the linear threshold model for simulating multiple cascades on a network while allowing nodes to switch between them. The proposed model is shown to be a rapidly mixing Markov chain and the corresponding steady state distribution is used to estimate highly likely states of the cascades' spread in the network. Results on a variety of real world networks demonstrate the high quality of the estimated solution.",
"title": ""
},
{
"docid": "6981b51813c8e9914f8dc4b965a81fd4",
"text": "Search result diversification has been effectively employed to tackle query ambiguity, particularly in the context of web search. However, ambiguity can manifest differently in different search verticals, with ambiguous queries spanning, e.g., multiple place names, content genres, or time periods. In this paper, we empirically investigate the need for diversity across four different verticals of a commercial search engine, including web, image, news, and product search. As a result, we introduce the problem of aggregated search result diversification as the task of satisfying multiple information needs across multiple search verticals. Moreover, we propose a probabilistic approach to tackle this problem, as a natural extension of state-of-the-art diversification approaches. Finally, we generalise standard diversity metrics, such as ERR-IA and α-nDCG, into a framework for evaluating diversity across multiple search verticals.",
"title": ""
},
{
"docid": "9c799b4d771c724969be7b392697ebee",
"text": "Search engines need to model user satisfaction to improve their services. Since it is not practical to request feedback on searchers' perceptions and search outcomes directly from users, search engines must estimate satisfaction from behavioral signals such as query refinement, result clicks, and dwell times. This analysis of behavior in the aggregate leads to the development of global metrics such as satisfied result clickthrough (typically operationalized as result-page clicks with dwell time exceeding a particular threshold) that are then applied to all searchers' behavior to estimate satisfac-tion levels. However, satisfaction is a personal belief and how users behave when they are satisfied can also differ. In this paper we verify that searcher behavior when satisfied and dissatisfied is indeed different among individual searchers along a number of dimensions. As a result, we introduce and evaluate learned models of satisfaction for individual searchers and searcher cohorts. Through experimentation via logs from a large commercial Web search engine, we show that our proposed models can predict search satisfaction more accurately than a global baseline that applies the same satisfaction model across all users. Our findings have implications for the study and application of user satisfaction in search systems.",
"title": ""
},
{
"docid": "c53021193518ebdd7006609463bafbcc",
"text": "BACKGROUND AND OBJECTIVES\nSleep is important to child development, but there is limited understanding of individual developmental patterns of sleep, their underlying determinants, and how these influence health and well-being. This article explores the presence of various sleep patterns in children and their implications for health-related quality of life.\n\n\nMETHODS\nData were collected from the Longitudinal Study of Australian Children. Participants included 2926 young children followed from age 0 to 1 years to age 6 to 7 years. Data on sleep duration were collected every 2 years, and covariates (eg, child sleep problems, maternal education) were assessed at baseline. Growth mixture modeling was used to identify distinct longitudinal patterns of sleep duration and significant covariates. Linear regression examined whether the distinct sleep patterns were significantly associated with health-related quality of life.\n\n\nRESULTS\nThe results identified 4 distinct sleep duration patterns: typical sleepers (40.6%), initially short sleepers (45.2%), poor sleepers (2.5%), and persistent short sleepers (11.6%). Factors such as child sleep problems, child irritability, maternal employment, household financial hardship, and household size distinguished between the trajectories. The results demonstrated that the trajectories had different implications for health-related quality of life. For instance, persistent short sleepers had poorer physical, emotional, and social health than typical sleepers.\n\n\nCONCLUSIONS\nThe results provide a novel insight into the nature of child sleep and the implications of differing sleep patterns for health-related quality of life. The findings could inform the development of effective interventions to promote healthful sleep patterns in children.",
"title": ""
},
{
"docid": "7d8884a7f6137068f8ede464cf63da5b",
"text": "Object detection and localization is a crucial step for inspection and manipulation tasks in robotic and industrial applications. We present an object detection and localization scheme for 3D objects that combines intensity and depth data. A novel multimodal, scale- and rotation-invariant feature is used to simultaneously describe the object's silhouette and surface appearance. The object's position is determined by matching scene and model features via a Hough-like local voting scheme. The proposed method is quantitatively and qualitatively evaluated on a large number of real sequences, proving that it is generic and highly robust to occlusions and clutter. Comparisons with state of the art methods demonstrate comparable results and higher robustness with respect to occlusions.",
"title": ""
},
{
"docid": "aded7e5301d40faf52942cd61a1b54ba",
"text": "In this paper, a lower limb rehabilitation robot in sitting position is developed for patients with muscle weakness. The robot is a stationary based type which is able to perform various types of therapeutic exercises. For safe operation, the robot's joint is driven by two-stage cable transmission while the balance mechanism is used to reduce actuator size and transmission ratio. Control algorithms for passive, assistive and resistive exercises are designed to match characteristics of each therapeutic exercises and patients with different muscle strength. Preliminary experiments conducted with a healthy subject have demonstrated that the robot and the control algorithms are promising for lower limb rehabilitation task.",
"title": ""
},
{
"docid": "67a958a34084061e3bcd7964790879c4",
"text": "Researchers spent lots of time in searching published articles relevant to their project. Though having similar interest in projects researches perform individual and time overwhelming searches. But researchers are unable to control the results obtained from earlier search process, whereas they can share the results afterwards. We propose a research paper recommender system by enhancing existing search engines with recommendations based on preceding searches performed by others researchers that avert time absorbing searches. Top-k query algorithm retrieves best answers from a potentially large record set so that we find the most accurate records from the given record set that matches the filtering keywords. KeywordsRecommendation System, Personalization, Profile, Top-k query, Steiner Tree",
"title": ""
},
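The abstract above relies on a top-k query algorithm to pull the best-matching records out of a potentially large set. As a rough illustration of that retrieval step only, and not of the paper's actual system, the sketch below scores records by keyword overlap and keeps the k best with a bounded heap; the record layout and the scoring function are assumptions made for the example.

```python
import heapq

def top_k_records(records, keywords, k=10):
    """Return the k records whose text best matches the filtering keywords.

    `records` is a list of dicts with a "text" field; scoring here is a
    simple keyword-overlap count, standing in for whatever relevance
    function the underlying search engine provides.
    """
    keywords = {w.lower() for w in keywords}

    def score(record):
        tokens = set(record["text"].lower().split())
        return len(tokens & keywords)

    # heapq.nlargest keeps only k candidates in memory, which is the point
    # of a top-k query over a potentially large record set.
    return heapq.nlargest(k, records, key=score)

if __name__ == "__main__":
    records = [
        {"docid": "a", "text": "top-k query processing over large record sets"},
        {"docid": "b", "text": "personalization profiles for recommender systems"},
        {"docid": "c", "text": "Steiner tree approximation algorithms"},
    ]
    for r in top_k_records(records, ["top-k", "query", "recommender"], k=2):
        print(r["docid"])
```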
{
"docid": "7bd901463614409eee12d6968e4f4d19",
"text": "This study investigated the inactivation of two antibiotic resistance genes (ARGs)-sul1 and tetG, and the integrase gene of class 1 integrons-intI1 by chlorination, ultraviolet (UV), and ozonation disinfection. Inactivation of sul1, tetG, and intI1 underwent increased doses of three disinfectors, and chlorine disinfection achieved more inactivation of ARGs and intI1 genes (chlorine dose of 160 mg/L with contact time of 120 min for 2.98-3.24 log reductions of ARGs) than UV irradiation (UV dose of 12,477 mJ/cm(2) for 2.48-2.74 log reductions of ARGs) and ozonation disinfection (ozonation dose of 177.6 mg/L for 1.68-2.55 log reductions of ARGs). The 16S rDNA was more efficiently removed than ARGs by ozone disinfection. The relative abundance of selected genes (normalized to 16S rDNA) increased during ozonation and with low doses of UV and chlorine disinfection. Inactivation of sul1 and tetG showed strong positive correlations with the inactivation of intI1 genes (for sul1, R (2) = 0.929 with p < 0.01; for tetG, R (2) = 0.885 with p < 0.01). Compared to other technologies (ultraviolet disinfection, ozonation disinfection, Fenton oxidation, and coagulation), chlorination is an alternative method to remove ARGs from wastewater effluents. At a chlorine dose of 40 mg/L with 60 min contact time, the selected genes inactivation efficiency could reach 1.65-2.28 log, and the cost was estimated at 0.041 yuan/m(3).",
"title": ""
},
{
"docid": "d992300ed0d3e95c14eb115f0f3b09ac",
"text": "The purpose of this paper is to determine those factors that influence the adoption of internet banking services in Tunisia. A theoretical model is provided that conceptualizes and links different factors influencing the adoption of internet banking. A total of 253 respondents in Tunisia were sampled for responding: 95 were internet bank users, 158 were internet bank non users. Factor analyses and regression technique are employed to study the relationship. The results of the model tested clearly that use of internet banking in Tunisia is influenced most strongly by convenience, risk, security and prior internet knowledge. Only information on online banking did not affect intention to use internet banking service in Tunisia. The results also propose that demographic factors impact significantly internet banking behaviour, specifically, occupation and instruction. Finally, this paper suggests that an understanding the factors affecting intention to use internet banking is very important to the practitioners who plan and promote new forms of banking in the current competitive market.",
"title": ""
},
{
"docid": "1e59c6cc3dcc34ec26b912a5162635ed",
"text": "Finding clusters with widely differing sizes, shapes and densities in presence of noise and outliers is a challenging job. The DBSCAN is a versatile clustering algorithm that can find clusters with differing sizes and shapes in databases containing noise and outliers. But it cannot find clusters based on difference in densities. We extend the DBSCAN algorithm so that it can also detect clusters that differ in densities. Local densities within a cluster are reasonably homogeneous. Adjacent regions are separated into different clusters if there is significant change in densities. Thus the algorithm attempts to find density based natural clusters that may not be separated by any sparse region. Computational complexity of the algorithm is O(n log n).",
"title": ""
},
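The preceding abstract extends DBSCAN so that adjacent regions are separated into different clusters when their local densities change significantly. The sketch below illustrates that idea rather than the authors' exact algorithm: it grows clusters from core points as in DBSCAN but only merges eps-neighbours whose local density ratio stays within a factor tau; eps, min_pts and tau are illustrative parameters.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def density_varying_dbscan(X, eps=0.5, min_pts=5, tau=2.0):
    """Minimal sketch of a DBSCAN variant that also splits on density changes.

    Points are merged into the same cluster only if they are eps-reachable
    AND their local densities (eps-neighbourhood counts) differ by at most a
    factor of `tau`. Returns an array of labels, -1 meaning noise.
    """
    n = len(X)
    nn = NearestNeighbors(radius=eps).fit(X)
    neighbors = nn.radius_neighbors(X, return_distance=False)
    density = np.array([len(nb) for nb in neighbors])  # includes the point itself

    labels = np.full(n, -1)
    cluster_id = 0
    for i in range(n):
        if labels[i] != -1 or density[i] < min_pts:
            continue  # already assigned, or not a core point
        labels[i] = cluster_id
        queue = [i]
        while queue:
            p = queue.pop()
            if density[p] < min_pts:
                continue  # border point: joins the cluster but does not expand it
            for q in neighbors[p]:
                ratio = max(density[p], density[q]) / max(min(density[p], density[q]), 1)
                if labels[q] == -1 and ratio <= tau:
                    labels[q] = cluster_id
                    queue.append(q)
        cluster_id += 1
    return labels
```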
{
"docid": "fd2e7025271565927f43784f0c69c3fb",
"text": "In this paper, we have proposed a fingerprint orientation model based on 2D Fourier expansions (FOMFE) in the phase plane. The FOMFE does not require prior knowledge of singular points (SPs). It is able to describe the overall ridge topology seamlessly, including the SP regions, even for noisy fingerprints. Our statistical experiments on a public database show that the proposed FOMFE can significantly improve the accuracy of fingerprint feature extraction and thus that of fingerprint matching. Moreover, the FOMFE has a low-computational cost and can work very efficiently on large fingerprint databases. The FOMFE provides a comprehensive description for orientation features, which has enabled its beneficial use in feature-related applications such as fingerprint indexing. Unlike most indexing schemes using raw orientation data, we exploit FOMFE model coefficients to generate the feature vector. Our indexing experiments show remarkable results using different fingerprint databases",
"title": ""
},
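The FOMFE abstract above models a fingerprint orientation field with a 2D Fourier expansion fitted over the whole phase plane. A minimal version of that fitting step can be written as an ordinary least-squares problem over a truncated 2D Fourier basis applied to the doubled-angle fields cos(2θ) and sin(2θ); the basis order and coordinate normalisation below are assumptions, not the paper's exact parameterisation.

```python
import numpy as np

def fourier_basis(x, y, order, wx, wy):
    """2D Fourier basis evaluated at flattened coordinates x, y (shape (N,))."""
    cols = []
    for k in range(order + 1):
        fx = [np.ones_like(x)] if k == 0 else [np.cos(k * wx * x), np.sin(k * wx * x)]
        for l in range(order + 1):
            fy = [np.ones_like(y)] if l == 0 else [np.cos(l * wy * y), np.sin(l * wy * y)]
            cols += [a * b for a in fx for b in fy]
    return np.stack(cols, axis=1)  # (N, n_basis)

def fit_orientation_model(theta, order=4):
    """Fit Fourier coefficients to an orientation field theta (H x W, radians).

    The doubled-angle fields cos(2*theta) and sin(2*theta) are modelled
    separately, which avoids the pi-periodicity of ridge orientations.
    Returns the two coefficient vectors and a function evaluating the model.
    """
    H, W = theta.shape
    ys, xs = np.mgrid[0:H, 0:W]
    x, y = xs.ravel().astype(float), ys.ravel().astype(float)
    wx, wy = 2 * np.pi / W, 2 * np.pi / H
    A = fourier_basis(x, y, order, wx, wy)
    c_cos, *_ = np.linalg.lstsq(A, np.cos(2 * theta).ravel(), rcond=None)
    c_sin, *_ = np.linalg.lstsq(A, np.sin(2 * theta).ravel(), rcond=None)

    def evaluate(xq, yq):
        B = fourier_basis(np.atleast_1d(xq).astype(float),
                          np.atleast_1d(yq).astype(float), order, wx, wy)
        return 0.5 * np.arctan2(B @ c_sin, B @ c_cos)  # reconstructed orientation

    return c_cos, c_sin, evaluate
```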
{
"docid": "3819259ca40ee3c075e80bdf2ded4475",
"text": "BACKGROUND\nThe extant major psychiatric classifications DSM-IV, and ICD-10, are atheoretical and largely descriptive. Although this achieves good reliability, the validity of a medical diagnosis would be greatly enhanced by an understanding of risk factors and clinical manifestations. In an effort to group mental disorders on the basis of aetiology, five clusters have been proposed. This paper considers the validity of the fourth cluster, emotional disorders, within that proposal.\n\n\nMETHOD\nWe reviewed the literature in relation to 11 validating criteria proposed by a Study Group of the DSM-V Task Force, as applied to the cluster of emotional disorders.\n\n\nRESULTS\nAn emotional cluster of disorders identified using the 11 validators is feasible. Negative affectivity is the defining feature of the emotional cluster. Although there are differences between disorders in the remaining validating criteria, there are similarities that support the feasibility of an emotional cluster. Strong intra-cluster co-morbidity may reflect the action of common risk factors and also shared higher-order symptom dimensions in these emotional disorders.\n\n\nCONCLUSION\nEmotional disorders meet many of the salient criteria proposed by the Study Group of the DSM-V Task Force to suggest a classification cluster.",
"title": ""
},
{
"docid": "ed66f39bda7ccd5c76f64543b5e3abd6",
"text": "BACKGROUND\nLoeys-Dietz syndrome is a recently recognized multisystemic disorder caused by mutations in the genes encoding the transforming growth factor-beta receptor. It is characterized by aggressive aneurysm formation and vascular tortuosity. We report the musculoskeletal demographic, clinical, and imaging findings of this syndrome to aid in its diagnosis and treatment.\n\n\nMETHODS\nWe retrospectively analyzed the demographic, clinical, and imaging data of sixty-five patients with Loeys-Dietz syndrome seen at one institution from May 2007 through December 2008.\n\n\nRESULTS\nThe patients had a mean age of twenty-one years, and thirty-six of the sixty-five patients were less than eighteen years old. Previous diagnoses for these patients included Marfan syndrome (sixteen patients) and Ehlers-Danlos syndrome (two patients). Spinal and foot abnormalities were the most clinically important skeletal findings. Eleven patients had talipes equinovarus, and nineteen patients had cervical anomalies and instability. Thirty patients had scoliosis (mean Cobb angle [and standard deviation], 30 degrees +/- 18 degrees ). Two patients had spondylolisthesis, and twenty-two of thirty-three who had computed tomography scans had dural ectasia. Thirty-five patients had pectus excavatum, and eight had pectus carinatum. Combined thumb and wrist signs were present in approximately one-fourth of the patients. Acetabular protrusion was present in approximately one-third of the patients and was usually mild. Fourteen patients had previous orthopaedic procedures, including scoliosis surgery, cervical stabilization, clubfoot correction, and hip arthroplasty. Features of Loeys-Dietz syndrome that are important clues to aid in making this diagnosis include bifid broad uvulas, hypertelorism, substantial joint laxity, and translucent skin.\n\n\nCONCLUSIONS\nPatients with Loeys-Dietz syndrome commonly present to the orthopaedic surgeon with cervical malformations, spinal and foot deformities, and findings in the craniofacial and cutaneous systems.\n\n\nLEVEL OF EVIDENCE\nTherapeutic Level IV. See Instructions to Authors for a complete description of levels of evidence.",
"title": ""
}
] |
scidocsrr
|
ca27952091cbd42798a0c86b4f80432e
|
Question Answering with Subgraph Embeddings
|
[
{
"docid": "b796a957545aa046bad14d44c4578700",
"text": "Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced “sibling” precision metric, where our method also obtains excellent results.",
"title": ""
}
] |
[
{
"docid": "9de8fe455203b65082dca9acbf5f330d",
"text": "Traffic Sign recognition system is a part of driving assistance system that automatically alerts and informs the driver of the traffic signs ahead. In this paper an efficient real time sign detection system is proposed for Indian traffic signs. Car cameras that capture video are integrated with an in-vehicle computing device. Image frames may be blurred and corrupted by Gaussian noise due to motion of vehicle and atmospheric turbulence. Hence Image enhancement is done using median filter and nonlinear Lucy-Richardson for de-convolution. Colour segmentation using YCbCr colour space along with shape filtering through template matching of colour detected candidates are used to detect sign from images as colour and shape easily distinguishes a sign from its background. The classification module determines the type of detected road signs using Multi-layer Perceptron neural networks.",
"title": ""
},
{
"docid": "4553d8fa9c5f48dae4a1a62a11ed4257",
"text": "Playing violent video games have been linked to long-term emotional desensitization. We hypothesized that desensitization effects in excessive users of violent video games should lead to decreased brain activations to highly salient emotional pictures in emotional sensitivity brain regions. Twenty-eight male adult subjects showing excessive long-term use of violent video games and age and education matched control participants were examined in two experiments using standardized emotional pictures of positive, negative and neutral valence. No group differences were revealed even at reduced statistical thresholds which speaks against desensitization of emotion sensitive brain regions as a result of excessive use of violent video games.",
"title": ""
},
{
"docid": "fa01916a99924eedbeed5127ee653a76",
"text": "Large-scale real-world graphs are known to have highly skewed vertex degree distribution and highly skewed edge weight distribution. Existing vertex-centric iterative graph computation models suffer from a number of serious problems: (1) poor performance of parallel execution due to inherent workload imbalance at vertex level; (2) inefficient CPU resource utilization due to short execution time for low-degree vertices compared to the cost of in-memory or on-disk vertex access; and (3) incapability of pruning insignificant vertices or edges to improve the computational performance. In this paper, we address the above technical challenges by designing and implementing a scalable, efficient, and provably correct two-tier graph parallel processing system, GraphTwist. At storage and access tier, GraphTwist maximizes parallel efficiency by employing three graph parallel abstractions for partitioning a big graph by slice, strip or dice based partitioning techniques. At computation tier, GraphTwist presents two utility-aware pruning strategies: slice pruning and cut pruning, to further improve the computational performance while preserving the computational utility defined by graph applications. Theoretic analysis is provided to quantitatively prove that iterative graph computations powered by utility-aware pruning techniques can achieve a very good approximation with bounds on the introduced error.",
"title": ""
},
{
"docid": "3bba773dc33ef83b975dd15803fac957",
"text": "In competitive games where players' skill levels are mis-matched, the play experience can be unsatisfying for both stronger and weaker players. Player balancing provides assistance for less-skilled players in order to make games more competitive and engaging. Although player balancing can be seen in many real-world games, there is little work on the design and effectiveness of these techniques outside of shooting games. In this paper we provide new knowledge about player balancing in the popular and competitive rac-ing genre. We studied issues of noticeability and balancing effectiveness in a prototype racing game, and tested the effects of several balancing techniques on performance and play experience. The techniques significantly improved the balance of player performance, were preferred by both experts and novices, increased novices' feelings of competi-tiveness, and did not detract from experts' experience. Our results provide new understanding of the design and use of player balancing for racing games, and provide novel tech-niques that can also be applied to other genres.",
"title": ""
},
{
"docid": "5a4315e5887bdbb6562e76b54d03beeb",
"text": "A combination of conventional cross sectional process and device simulations combined with top down and 3D device simulations have been used to design and optimise the integration of a 100V Lateral DMOS (LDMOS) device for high side bridge applications. This combined simulation approach can streamline the device design process and gain important information about end effects which are lost from 2D cross sectional simulations. Design solutions to negate detrimental end effects are proposed and optimised by top down and 3D simulations and subsequently proven on tested silicon.",
"title": ""
},
{
"docid": "9f1193bb28be16e2bdcb8b8b6985f300",
"text": "Tissue engineering is a newly emerging biomedical technology, which aids and increases the repair and regeneration of deficient and injured tissues. It employs the principles from the fields of materials science, cell biology, transplantation, and engineering in an effort to treat or replace damaged tissues. Tissue engineering and development of complex tissues or organs, such as heart, muscle, kidney, liver, and lung, are still a distant milestone in twenty-first century. Generally, there are four main challenges in tissue engineering which need optimization. These include biomaterials, cell sources, vascularization of engineered tissues, and design of drug delivery systems. Biomaterials and cell sources should be specific for the engineering of each tissue or organ. On the other hand, angiogenesis is required not only for the treatment of a variety of ischemic conditions, but it is also a critical component of virtually all tissue-engineering strategies. Therefore, controlling the dose, location, and duration of releasing angiogenic factors via polymeric delivery systems, in order to ultimately better mimic the stem cell niche through scaffolds, will dictate the utility of a variety of biomaterials in tissue regeneration. This review focuses on the use of polymeric vehicles that are made of synthetic and/or natural biomaterials as scaffolds for three-dimensional cell cultures and for locally delivering the inductive growth factors in various formats to provide a method of controlled, localized delivery for the desired time frame and for vascularized tissue-engineering therapies.",
"title": ""
},
{
"docid": "9a6249777e0137121df0c02cffe63b73",
"text": "With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation.",
"title": ""
},
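The vision pipeline above uses a Gaussian mixture model to detect moving targets entering the camera's field of view. The snippet below sketches just that detection stage with OpenCV's MOG2 background subtractor; image enhancement and the Kalman-assisted compressive tracker are left out, and the thresholds and kernel sizes are placeholder values.

```python
import cv2

def detect_moving_targets(video_path, min_area=500):
    """Detect moving targets with a Gaussian mixture background model (MOG2).

    Yields (frame, bounding_boxes) pairs; `video_path` is any video file
    readable by OpenCV. Only the detection stage of the described pipeline
    is reproduced here.
    """
    cap = cv2.VideoCapture(video_path)
    mog2 = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25,
                                              detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = mog2.apply(frame)
        # drop shadow pixels (value 127 in MOG2 masks) and clean up noise
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]
        yield frame, boxes
    cap.release()
```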
{
"docid": "b4c5ddab0cb3e850273275843d1f264f",
"text": "The increase of malware that are exploiting the Internet daily has become a serious threat. The manual heuristic inspection of malware analysis is no longer considered effective and efficient compared against the high spreading rate of malware. Hence, automated behavior-based malware detection using machine learning techniques is considered a profound solution. The behavior of each malware on an emulated (sandbox) environment will be automatically analyzed and will generate behavior reports. These reports will be preprocessed into sparse vector models for further machine learning (classification). The classifiers used in this research are k-Nearest Neighbors (kNN), Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), and Multilayer Perceptron Neural Network (MlP). Based on the analysis of the tests and experimental results of all the 5 classifiers, the overall best performance was achieved by J48 decision tree with a recall of 95.9%, a false positive rate of 2.4%, a precision of 97.3%, and an accuracy of 96.8%. In summary, it can be concluded that a proof-of-concept based on automatic behavior-based malware analysis and the use of machine learning techniques could detect malware quite effectively and efficiently.",
"title": ""
},
{
"docid": "17d7936917bdee4cfdc9e92db262baa7",
"text": "RDF models are widely used in the web of data due to their flexibility and similarity to graph patterns. Because of the growing use of RDFs, their volumes and contents are increasing. Therefore, processing of such massive amount of data on a single machine is not efficient enough, because of the response time and limited hardware resources. A common approach to overcome this limitation is cluster processing and huge datasets could benefit distributed cluster processing on Apache Hadoop. Because of using too much of hard disks, the processing time is usually inadequate. In this paper, we propose a partitiong approach based on Apache Spark for rapid processing of RDF data models. A key feature of Apache Spark is using main memory instead of hard disk, so the speed of data processing in our method is improved. We have evaluated the proposed method by runing SQL queris on RDF data which partitioned on the cluster and demonstrates improved performance.",
"title": ""
},
{
"docid": "ba886b9d8c3fbd02e5bae1b2cd423d00",
"text": "There is increasing evidence that users' characteristics such as cognitive abilities and personality have an impact on the effectiveness of information visualization techniques. This paper investigates the relationship between such characteristics and fine-grained user attention patterns. In particular, we present results from an eye tracking user study involving bar graphs and radar graphs, showing that a user's cognitive abilities such as perceptual speed and verbal working memory have a significant impact on gaze behavior, both in general and in relation to task difficulty and visualization type. These results are discussed in view of our long-term goal of designing information visualisation systems that can dynamically adapt to individual user characteristics.",
"title": ""
},
{
"docid": "b0e3249bbea278ceee2154aba5ea99d8",
"text": "Much of the current research in learning Bayesian Networks fails to eeectively deal with missing data. Most of the methods assume that the data is complete, or make the data complete using fairly ad-hoc methods; other methods do deal with missing data but learn only the conditional probabilities, assuming that the structure is known. We present a principled approach to learn both the Bayesian network structure as well as the conditional probabilities from incomplete data. The proposed algorithm is an iterative method that uses a combination of Expectation-Maximization (EM) and Imputation techniques. Results are presented on synthetic data sets which show that the performance of the new algorithm is much better than ad-hoc methods for handling missing data.",
"title": ""
},
{
"docid": "c796a0c9fd09f795a32f2ef09b1c0405",
"text": "Vectors of data are at the heart of machine learning and data mining. Recently, vector quantization methods have shown great promise in reducing both the time and space costs of operating on vectors. We introduce a vector quantization algorithm that can compress vectors over 12x faster than existing techniques while also accelerating approximate vector operations such as distance and dot product computations by up to 10x. Because it can encode over 2GB of vectors per second, it makes vector quantization cheap enough to employ in many more circumstances. For example, using our technique to compute approximate dot products in a nested loop can multiply matrices faster than a state-of-the-art BLAS implementation, even when our algorithm must first compress the matrices. In addition to showing the above speedups, we demonstrate that our approach can accelerate nearest neighbor search and maximum inner product search by over 100x compared to floating point operations and 10x compared to other vector quantization methods. Our approximate Euclidean distance and dot product computations are not only faster than those of related algorithms with slower encodings, but also faster than Hamming distance computations, which have direct hardware support on the tested platforms. We also assess the errors of our algorithm's approximate distances and dot products, and find that it is competitive with existing, slower vector quantization algorithms.",
"title": ""
},
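The abstract above describes a fast vector-quantization scheme for approximate distances and dot products. The sketch below shows the generic product-quantization recipe that such methods build on, not the paper's own encoder: vectors are split into subspaces, each subspace gets a small k-means codebook, and a query turns into an m-by-k lookup table so every encoded vector costs only m table lookups.

```python
import numpy as np
from sklearn.cluster import KMeans

class ProductQuantizer:
    """Minimal product-quantization sketch for approximate dot products."""

    def __init__(self, m=8, k=16, seed=0):
        self.m, self.k, self.seed = m, k, seed

    def fit(self, X):
        self.sub = np.array_split(np.arange(X.shape[1]), self.m)
        self.codebooks = [KMeans(n_clusters=self.k, n_init=4, random_state=self.seed)
                          .fit(X[:, idx]) for idx in self.sub]
        return self

    def encode(self, X):
        # one small integer code per subspace per vector
        return np.stack([cb.predict(X[:, idx])
                         for cb, idx in zip(self.codebooks, self.sub)], axis=1)

    def dot_table(self, q):
        # table[j, c] = <q restricted to subspace j, centroid c of subspace j>
        return np.stack([cb.cluster_centers_ @ q[idx]
                         for cb, idx in zip(self.codebooks, self.sub)])

    def approx_dot(self, codes, q):
        table = self.dot_table(q)
        return table[np.arange(self.m), codes].sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 64))
    q = rng.normal(size=64)
    pq = ProductQuantizer().fit(X)
    codes = pq.encode(X)
    # correlation between approximate and exact dot products
    print(np.corrcoef(pq.approx_dot(codes, q), X @ q)[0, 1])
```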
{
"docid": "a0666650af31b3a9fa47d4f38010d43d",
"text": "Graph learning is currently dominated by graph kernels, which, while powerful, suffer some significant limitations. Convolutional Neural Networks (CNNs) offer a very appealing alternative, but processing graphs with CNNs is not trivial. To address this challenge, many sophisticated extensions of CNNs have recently been introduced. In this paper, we reverse the problem: rather than proposing yet another graph CNNmodel, we introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs. Experiments reveal that our method is more accurate than state-of-the-art graph kernels and graph CNNs on 4 out of 6 real-world datasets (with and without continuous node attributes), and close elsewhere. Our approach is also preferable to graph kernels in terms of time complexity. Code and data are publicly available1.",
"title": ""
},
{
"docid": "92d670340cfbfc29bc4881acbc0b44dd",
"text": "Truly universal helper robots capable of coping with unknown, unstructured environments must be capable of spatial reasoning, i.e., establishing geometric relations between objects and locations, expressing those in terms understandable by humans. It is therefore desirable that spatial and semantic environment representations are tightly interlinked. 3D robotic mapping and the generation of consistent metric representations of space is highly useful for navigation and exploration, but they do not capture symbol-level information about the environment. This is, however, essential for reasoning, and enables interaction via natural language, which is arguably the most common and natural communication channel used and understood by humans. This article presents a review of research in three major fields relevant for this discussion of spatial reasoning and interaction. Firstly, dialogue systems are an integral part of modern approaches to situated human-robot interaction. Secondly, interactive robots must be equipped with environment representations and reasoning methods that is suitable for both navigation and task fulfillment, as well as for interaction with human partners. Thirdly, at the interface between these domains are systems that ground language in systemic environment representation and which allow the integration of information from natural language descriptions into robotic maps. For each of these areas, important approaches are outlined and relations between the fields are highlighted, and challenging applications as well as open problems are discussed.",
"title": ""
},
{
"docid": "f4c78c6f0424458cbeea67a498679344",
"text": "In the United States, the office of the Medical Examiner-Coroner is responsible for investigating all sudden and unexpected deaths and deaths by violence. Its jurisdiction includes deaths during the arrest procedures and deaths in police custody. Police officers are sometimes required to subdue and restrain an individual who is violent, often irrational and resisting arrest. This procedure may cause harm to the subject and to the arresting officers. This article deals with our experiences in Los Angeles and reviews the policies and procedures for investigating and determining the cause and manner of death in such cases. We have taken a \"quality improvement approach\" to the study of these deaths due to restraint asphyxia and related officer involved deaths, Since 1999, through interagency coordination with law enforcement agencies similar to the hospital healthcare quality improvement meeting program, detailed information related to the sequence of events in these cases and ideas for improvements to prevent such deaths are discussed.",
"title": ""
},
{
"docid": "f392b4ba1cface8be439bf86a3e4c2bd",
"text": "STUDY DESIGN\nCase-control study comparing sagittal plane segmental motion in women (n = 34) with chronic whiplash-associated disorders, Grades I-II, with women (n = 35) with chronic insidious onset neck pain and with a normal database of sagittal plane rotational and translational motion.\n\n\nOBJECTIVE\nTo reveal whether women with chronic whiplash-associated disorders, Grades I-II, demonstrate evidence of abnormal segmental motions in the cervical spine.\n\n\nSUMMARY OF BACKGROUND DATA\nIt is hypothesized that unphysiological spinal motion experienced during an automobile accident may result in a persistent disturbance of segmental motion. It is not known whether patients with chronic whiplash-associated disorders differ from patients with chronic insidious onset neck pain with respect to segmental mobility.\n\n\nMETHODS\nLateral radiographic views were taken in assisted maximal flexion and extension. A new measurement protocol determined rotational and translational motions of segments C3-C4 and C5-C6 with high precision. Segmental motion was compared with normal data as well as among groups.\n\n\nRESULTS\nIn the whiplash-associated disorders group, the C3-C4 and C4-C5 segments showed significantly increased rotational motions. Translational motions within each segment revealed a significant deviation from normal at the C3-C4 segment in the whiplash-associated disorders and insidious onset neck pain groups and at the C5-C6 segment in the whiplash-associated disorders group. Significantly more women in the whiplash-associated disorders group (35.3%) had abnormal increased segmental motions compared to the insidious onset neck pain group (8.6%) when both the rotational and the translational parameters were analyzed. When the translational parameter was analyzed separately, no significant difference was found between groups, or 17.6% (whiplash-associated disorders group) and 8.6% (insidious onset neck pain group), respectively.\n\n\nCONCLUSION\nHypermobility in the lower cervical spine segments in 12 out of 34 patients with chronic whiplash-associated disorders in this study point to injury caused by the accident. This subgroup, identified by the new radiographic protocol, might need a specific therapeutic intervention.",
"title": ""
},
{
"docid": "695e5694fd09292577552ad6eeb08713",
"text": "For many robotics and intelligent vehicle applications, detection and tracking multiple objects (DATMO) is one of the most important components. However, most of the DATMO applications have difficulty in applying real-world applications due to high computational complexity. In this paper, we propose an efficient DATMO framework that fully employs the complementary information from the color camera and the 3D LIDAR. For high efficiency, we present a segmentation scheme by using both 2D and 3D information which gives accurate segments very quickly. In our experiments, we show that our framework can achieve the faster speed (~4Hz) than the state-of-the-art methods reported in KITTI benchmark (>1Hz).",
"title": ""
},
{
"docid": "f8d4607784072423db18ce4fb6a819b2",
"text": "Organic solar cell research has developed during the past 30 years, but especially in the last decade it has attracted scientific and economic interest triggered by a rapid increase in power conversion efficiencies. This was achieved by the introduction of new materials, improved materials engineering, and more sophisticated device structures. Today, solar power conversion efficiencies in excess of 3% have been accomplished with several device concepts. Though efficiencies of these thin-film organic devices have not yet reached those of their inorganic counterparts ( ≈ 10–20%); the perspective of cheap production (employing, e.g., roll-to-roll processes) drives the development of organic photovoltaic devices further in a dynamic way. The two competitive production techniques used today are either wet solution processing or dry thermal evaporation of the organic constituents. The field of organic solar cells profited well from the development of light-emitting diodes based on similar technologies, which have entered the market recently. We review here the current status of the field of organic solar cells and discuss different production technologies as well as study the important parameters to improve their performance.",
"title": ""
},
{
"docid": "c1eefd9a127a0ea9c7e43fdfbdba689e",
"text": "We present TRANX, a transition-based neural semantic parser that maps natural language (NL) utterances into formal meaning representations (MRs). TRANX uses a transition system based on the abstract syntax description language for the target MR, which gives it two major advantages: (1) it is highly accurate, using information from the syntax of the target MR to constrain the output space and model the information flow, and (2) it is highly generalizable, and can easily be applied to new types of MR by just writing a new abstract syntax description corresponding to the allowable structures in the MR. Experiments on four different semantic parsing and code generation tasks show that our system is generalizable, extensible, and effective, registering strong results compared to existing neural semantic parsers.1",
"title": ""
},
{
"docid": "f4b5b71398e3a40c76b1f58d3f05a83d",
"text": "Creativity and innovation in any organization are vital to its successful performance. The authors review the rapidly growing body of research in this area with particular attention to the period 2002 to 2013, inclusive. Conceiving of both creativity and innovation as being integral parts of essentially the same process, we propose a new, integrative definition. We note that research into creativity has typically examined the stage of idea generation, whereas innovation studies have commonly also included the latter phase of idea implementation. The authors discuss several seminal theories of creativity and innovation, then apply a comprehensive levels-of-analysis framework to review extant research into individual, team, organizational, and multi-level innovation. Key measurement characteristics of the reviewed studies are then noted. In conclusion, we propose a guiding framework for future research comprising eleven major themes and sixty specific questions for future studies. INNOVATION AND CREATIVITY 3 INNOVATION AND CREATIVITY IN ORGANIZATIONS: A STATE-OF-THE-SCIENCE REVIEW, PROSPECTIVE COMMENTARY, AND",
"title": ""
}
] |
scidocsrr
|
9b37e38947297cb5cea734937916f552
|
Gradient Adversarial Training of Neural Networks
|
[
{
"docid": "0141a93f93a7cf3c8ee8fd705b0a9657",
"text": "We systematically explore regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect a maximum entropy based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT’14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide applicability of these regularizers.",
"title": ""
},
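The regularizers discussed in the abstract above, the confidence penalty and label smoothing, are easy to express as loss terms. The following PyTorch-style sketch shows one plausible formulation; the beta and eps values are arbitrary placeholders, not the paper's tuned settings.

```python
import torch
import torch.nn.functional as F

def confidence_penalty_loss(logits, targets, beta=0.1):
    """Cross-entropy plus a penalty on low-entropy (over-confident) outputs.

    Subtracting beta times the entropy of the predictive distribution
    discourages the model from putting all probability mass on one class.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    ce = F.nll_loss(log_probs, targets)
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    return ce - beta * entropy

def label_smoothing_loss(logits, targets, eps=0.1):
    """Label smoothing: cross-entropy against targets mixed with a uniform
    distribution, the counterpart of the confidence penalty discussed above."""
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_probs, eps / (n_classes - 1))
    smooth.scatter_(-1, targets.unsqueeze(-1), 1.0 - eps)
    return -(smooth * log_probs).sum(dim=-1).mean()
```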
{
"docid": "06e8a075db8e18ca0b3a1dd91b89af45",
"text": "Deep neural networks have proven remarkably effective at solving many classification problems, but have been criticized recently for two major weaknesses: the reasons behind their predictions are uninterpretable, and the predictions themselves can often be fooled by small adversarial perturbations. These problems pose major obstacles for the adoption of neural networks in domains that require security or transparency. In this work, we evaluate the effectiveness of defenses that differentiably penalize the degree to which small changes in inputs can alter model predictions. Across multiple attacks, architectures, defenses, and datasets, we find that neural networks trained with this input gradient regularization exhibit robustness to transferred adversarial examples generated to fool all of the other models. We also find that adversarial examples generated to fool gradient-regularized models fool all other models equally well, and actually lead to more “legitimate,” interpretable misclassifications as rated by people (which we confirm in a human subject experiment). Finally, we demonstrate that regularizing input gradients makes them more naturally interpretable as rationales for model predictions. We conclude by discussing this relationship between interpretability and robustness in deep neural",
"title": ""
},
{
"docid": "5759152f6e9a9cb1e6c72857e5b3ec54",
"text": "Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly. We present a gradient normalization (GradNorm) algorithm that automatically balances training in deep multitask models by dynamically tuning gradient magnitudes. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting across multiple tasks when compared to single-task networks, static baselines, and other adaptive multitask loss balancing techniques. GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter α. Thus, what was once a tedious search process that incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we will demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.",
"title": ""
}
] |
[
{
"docid": "559637a4f8f5b99bb3210c5c7d03d2e0",
"text": "Third-generation personal navigation assistants (PNAs) (i.e., those that provide a map, the user's current location, and directions) must be able to reconcile the user's location with the underlying map. This process is known as map matching. Most existing research has focused on map matching when both the user's location and the map are known with a high degree of accuracy. However, there are many situations in which this is unlikely to be the case. Hence, this paper considers map matching algorithms that can be used to reconcile inaccurate locational data with an inaccurate map/network. Ó 2000 Published by Elsevier Science Ltd.",
"title": ""
},
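Map matching, as described above, reconciles a noisy position with an underlying road network. The geometric core of most such algorithms is snapping a point to its nearest road segment; the sketch below implements only that primitive, with planar coordinates and a made-up segment list, leaving out the topological and temporal constraints a full matcher would add.

```python
import math

def project_to_segment(p, a, b):
    """Project point p onto segment a-b; return (distance, projected point)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    qx, qy = ax + t * dx, ay + t * dy
    return math.hypot(px - qx, py - qy), (qx, qy)

def match_point(p, segments):
    """Geometric point-to-curve map matching: snap p to the closest segment.

    `segments` is a list of (segment_id, endpoint_a, endpoint_b) in planar
    coordinates. Real matchers also use heading, topology and history; the
    nearest-segment rule is just the basic building block.
    """
    best = None
    for seg_id, a, b in segments:
        d, q = project_to_segment(p, a, b)
        if best is None or d < best[0]:
            best = (d, seg_id, q)
    return best  # (distance, matched segment id, snapped position)

if __name__ == "__main__":
    roads = [("r1", (0, 0), (10, 0)), ("r2", (0, 5), (10, 5))]
    print(match_point((3.0, 1.2), roads))
```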
{
"docid": "3e6010f951eba0c82e8678f7d076162c",
"text": "In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, face recognition grand challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality",
"title": ""
},
{
"docid": "a72932cd98f425eafc19b9786da4319d",
"text": "Recommender systems are changing from novelties used by a few E-commerce sites, to serious business tools that are re-shaping the world of E-commerce. Many of the largest commerce Web sites are already using recommender systems to help their customers find products to purchase. A recommender system learns from a customer and recommends products that she will find most valuable from among the available products. In this paper we present an explanation of how recommender systems help E-commerce sites increase sales, and analyze six sites that use recommender systems including several sites that use more than one recommender system. Based on the examples, we create a taxonomy of recommender systems, including the interfaces they present to customers, the technologies used to create the recommendations, and the inputs they need from customers. We conclude with ideas for new applications of recommender systems to E-commerce.",
"title": ""
},
{
"docid": "288f32db8af5789e6e6049fa4cec0334",
"text": "Trusted execution environments, and particularly the Software Guard eXtensions (SGX) included in recent Intel x86 processors, gained significant traction in recent years. A long track of research papers, and increasingly also realworld industry applications, take advantage of the strong hardware-enforced confidentiality and integrity guarantees provided by Intel SGX. Ultimately, enclaved execution holds the compelling potential of securely offloading sensitive computations to untrusted remote platforms. We present Foreshadow, a practical software-only microarchitectural attack that decisively dismantles the security objectives of current SGX implementations. Crucially, unlike previous SGX attacks, we do not make any assumptions on the victim enclave’s code and do not necessarily require kernel-level access. At its core, Foreshadow abuses a speculative execution bug in modern Intel processors, on top of which we develop a novel exploitation methodology to reliably leak plaintext enclave secrets from the CPU cache. We demonstrate our attacks by extracting full cryptographic keys from Intel’s vetted architectural enclaves, and validate their correctness by launching rogue production enclaves and forging arbitrary local and remote attestation responses. The extracted remote attestation keys affect millions of devices.",
"title": ""
},
{
"docid": "a7de4a506e8ef6b1e62c7138fea9cee9",
"text": "In this paper, an optimized codebook design for a non-orthogonal multiple access scheme, called sparse code multiple access (SCMA), is presented. Unlike the low density signature (LDS) systems, in SCMA systems, the procedure of bits to QAM symbol mapping and spreading are combined together, and the incoming bits are directly mapped to the codewords of the SCMA codebook sets. Each layer or user has its dedicated codebook and the codebooks are all different. An improved method based on star-QAM signaling constellations is proposed here for designing the SCMA codebooks. It is demonstrated that the new method can greatly improve the BER performance without sacrificing the low detection complexity, compared to the existing codebooks and LDS.",
"title": ""
},
{
"docid": "458174ef63e195104e0efb71ca6043a7",
"text": "We consider classification problems in which the label space has structure. A common example is hierarchical label spaces, corresponding to the case where one label subsumes another (e.g., animal subsumes dog). But labels can also be mutually exclusive (e.g., dog vs cat) or unrelated (e.g., furry, carnivore). To jointly model hierarchy and exclusion relations, the notion of a HEX (hierarchy and exclusion) graph was introduced in [8]. This combined a conditional random field (CRF) with a deep neural network (DNN), resulting in state of the art results when applied to visual object classification problems where the training labels were drawn from different levels of the ImageNet hierarchy (e.g., an image might be labeled with the basic level category \"dog\", rather than the more specific label \"husky\"). In this paper, we extend the HEX model to allow for soft or probabilistic relations between labels, which is useful when there is uncertainty about the relationship between two labels (e.g., an antelope is \"sort of\" furry, but not to the same degree as a grizzly bear). We call our new model pHEX, for probabilistic HEX. We show that the pHEX graph can be converted to an Ising model, which allows us to use existing off-the-shelf inference methods (in contrast to the HEX method, which needed specialized inference algorithms). Experimental results show significant improvements in a number of large-scale visual object classification tasks, outperforming the previous HEX model.",
"title": ""
},
{
"docid": "9546f8a74577cc1119e48fae0921d3cf",
"text": "Learning latent representations from long text sequences is an important first step in many natural language processing applications. Recurrent Neural Networks (RNNs) have become a cornerstone for this challenging task. However, the quality of sentences during RNN-based decoding (reconstruction) decreases with the length of the text. We propose a sequence-to-sequence, purely convolutional and deconvolutional autoencoding framework that is free of the above issue, while also being computationally efficient. The proposed method is simple, easy to implement and can be leveraged as a building block for many applications. We show empirically that compared to RNNs, our framework is better at reconstructing and correcting long paragraphs. Quantitative evaluation on semi-supervised text classification and summarization tasks demonstrate the potential for better utilization of long unlabeled text data.",
"title": ""
},
{
"docid": "d64a0520a0cb49b1906d1d343ca935ec",
"text": "A 3D LTCC (low temperature co-fired ceramic) millimeter wave balun using asymmetric structure was investigated in this paper. The proposed balun consists of embedded multilayer microstrip and CPS (coplanar strip) lines. It was designed at 40GHz. The measured insertion loss of the back-to-back balanced transition is -1.14dB, thus the estimated insertion loss of each device is -0.57dB including the CPS line loss. The 10dB return loss bandwidth of the unbalanced back-to-back transition covers the frequency range of 17.3/spl sim/46.6GHz (91.7%). The area occupied by this balun is 0.42 /spl times/ 0.066/spl lambda//sub 0/ (2.1 /spl times/ 0.33mm/sup 2/). The high performances have been achieved using the low loss and relatively high dielectric constant of LTCC (/spl epsiv//sub r/=5.4, tan/spl delta/=0.0015 at 35GHz) and a 3D stacked configuration. This balun can be used as a transition of microstrip-to-CPS and vice-versa and insures also an impedance transformation from 50 to 110 Ohm for an easy integration with a high input impedance antenna. This is the first reported 40 GHz wideband 3D LTCC balun using asymmetric structure to balance the output amplitude and phase difference.",
"title": ""
},
{
"docid": "5e82e67ebb99cac1b3874bf08e03b550",
"text": "Nonsmooth nonnegative matrix factorization (nsNMF) is capable of producing more localized, less overlapped feature representations than other variants of NMF while keeping satisfactory fit to data. However, nsNMF as well as other existing NMF methods are incompetent to learn hierarchical features of complex data due to its shallow structure. To fill this gap, we propose a deep nsNMF method coined by the fact that it possesses a deeper architecture compared with standard nsNMF. The deep nsNMF not only gives part-based features due to the nonnegativity constraints but also creates higher level, more abstract features by combing lower level ones. The in-depth description of how deep architecture can help to efficiently discover abstract features in dnsNMF is presented, suggesting that the proposed model inherits the major advantages from both deep learning and NMF. Extensive experiments demonstrate the standout performance of the proposed method in clustering analysis.",
"title": ""
},
{
"docid": "70b2d88844a1390c1768f4e2adedf392",
"text": "In this paper, we explore the potential of extreme learning machine (ELM) and kernel ELM (KELM) for early diagnosis of Parkinson’s disease (PD). In the proposed method, the key parameters including the number of hidden neuron and type of activation function in ELM, and the constant parameter C and kernel parameter γ in KELM are investigated in detail. With the obtained optimal parameters, ELM and KELM manage to train the optimal predictive models for PD diagnosis. In order to further improve the performance of ELM and KELM models, feature selection techniques are implemented prior to the construction of the classification models. The effectiveness of the proposed method has been rigorously evaluated against the PD data set in terms of classification accuracy, sensitivity, specificity and the area under the ROC (receiver operating characteristic) curve (AUC). Compared to the existing methods in previous studies, the proposed method has achieved very promising classification accuracy via 10-fold cross-validation (CV) analysis, with the highest accuracy of 96.47% and average accuracy of 95.97% over 10 runs of 10-fold CV.",
"title": ""
},
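An extreme learning machine of the kind used above has a particularly simple training procedure: random, untrained hidden-layer weights and a closed-form regularized least-squares solution for the output layer. The sketch below captures that procedure; the hidden-layer size, regularization constant and activation are illustrative choices rather than the values tuned in the study.

```python
import numpy as np

class ELMClassifier:
    """Bare-bones extreme learning machine for classification.

    Hidden-layer weights are random and never trained; only the linear
    output layer is solved in closed form, with a small ridge term playing
    the role of the constant C mentioned in the abstract.
    """

    def __init__(self, n_hidden=200, C=1.0, seed=0):
        self.n_hidden, self.C, self.rng = n_hidden, C, np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid activations

    def fit(self, X, y):
        self.classes_, y_idx = np.unique(y, return_inverse=True)
        T = np.eye(len(self.classes_))[y_idx]            # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # regularized least squares: beta = (H^T H + I/C)^-1 H^T T
        A = H.T @ H + np.eye(self.n_hidden) / self.C
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def predict(self, X):
        return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]
```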
{
"docid": "0a8150abf09c6551e4cd771d12ed66c1",
"text": "Sarcasm presents a negative meaning with positive expressions and is a non-literalistic expression. Sarcasm detection is an important task because it contributes directly to the improvement of the accuracy of sentiment analysis tasks. In this study, we propose a extraction method of sarcastic sentences in product reviews. First, we analyze sarcastic sentences in product reviews and classify the sentences into 8 classes by focusing on evaluation expressions. Next, we generate classification rules for each class and use them to extract sarcastic sentences. Our method consists of three stage, judgment processes based on rules for 8 classes, boosting rules and rejection rules. In the experiment, we compare our method with a baseline based on a simple rule. The experimental result shows the effectiveness of our method.",
"title": ""
},
{
"docid": "fa82e5790554b3c0a0b5bc8f8037d3f5",
"text": "With cloud computing, new services in information technology (IT) emerge from the convergence of business and technology perspectives which furnish users access to IT resources anytime and anywhere using pay-per-use fashion. Therefore, it should supply eminent functioning gain to the user and simultaneously ought to be advantageous for the cloud service provider. To accomplish this goal, many challenges have to be faced, where load balancing is one of them. The optimal selection of a resource for a particular job does not mean that the selected resource persists optimized for the whole execution of the job. The resource overloading/under-loading must be avoided which could be gained by appropriate load balancing mechanisms. However, to the best of our knowledge, despite the importance of load balancing techniques and mechanisms, there is not any comprehensive and systematic review about studying and analyzing its important techniques. Hence, this paper presents a systematic literature review of the existing load balancing techniques proposed so far. Detailed classifications have also been included based on different parameters which are relying upon the analysis of the existing techniques. Also, the advantages and disadvantages associated with several load balancing algorithms have been discussed and the important challenges of these algorithms are addressed so that more efficient load balancing techniques can be developed in future.",
"title": ""
},
{
"docid": "d540250c51e97622a10bcb29f8fde956",
"text": "With many advantages of rectangular waveguide and microstrip lines, substrate integrated waveguide (SIW) can be used for design of planar waveguide-like slot antenna. However, the bandwidth of this kind of antenna structure is limited. In this work, a parasitic dipole is introduced and coupled with the SIW radiate slot. The results have indicated that the proposed technique can enhance the bandwidth of the SIW slot antenna significantly. The measured bandwidth of fabricated antenna prototype is about 19%, indicating about 115% bandwidth enhancement than the ridged substrate integrated waveguide (RSIW) slot antenna.",
"title": ""
},
{
"docid": "4c53f39b4b3921df3d3569f15b85a694",
"text": "In this paper, we introduce the concept of a “Hegemony of Play,” to critique the way in which a complex layering of technological, commercial and cultural power structures have dominated the development of the digital game industry over the past 35 years, creating an entrenched status quo which ignores the needs and desires of “minority” players such as women and “non-gamers,” Who in fact represent the majority of the population. Drawing from the history of pre-digital games, we demonstrate that these practices have “narrowed the playing field,” and contrary to conventional wisdom, have actually hindered, rather than boosted, its commercial success. We reject the inevitability of these power structures, and urge those in game studies to “step up to the plate” and take a more proactive stance in questioning and critiquing the status of the Hegemony of Play.",
"title": ""
},
{
"docid": "e8f7006c9235e04f16cfeeb9d3c4f264",
"text": "Widespread deployment of biometric systems supporting consumer transactions is starting to occur. Smart consumer devices, such as tablets and phones, have the potential to act as biometric readers authenticating user transactions. However, the use of these devices in uncontrolled environments is highly susceptible to replay attacks, where these biometric data are captured and replayed at a later time. Current approaches to counter replay attacks in this context are inadequate. In order to show this, we demonstrate a simple replay attack that is 100% effective against a recent state-of-the-art face recognition system; this system was specifically designed to robustly distinguish between live people and spoofing attempts, such as photographs. This paper proposes an approach to counter replay attacks for face recognition on smart consumer devices using a noninvasive challenge and response technique. The image on the screen creates the challenge, and the dynamic reflection from the person's face as they look at the screen forms the response. The sequence of screen images and their associated reflections digitally watermarks the video. By extracting the features from the reflection region, it is possible to determine if the reflection matches the sequence of images that were displayed on the screen. Experiments indicate that the face reflection sequences can be classified under ideal conditions with a high degree of confidence. These encouraging results may pave the way for further studies in the use of video analysis for defeating biometric replay attacks on consumer devices.",
"title": ""
},
{
"docid": "aa1c565018371cf12e703e06f430776b",
"text": "We propose a graph-based semantic model for representing document content. Our method relies on the use of a semantic network, namely the DBpedia knowledge base, for acquiring fine-grained information about entities and their semantic relations, thus resulting in a knowledge-rich document model. We demonstrate the benefits of these semantic representations in two tasks: entity ranking and computing document semantic similarity. To this end, we couple DBpedia's structure with an information-theoretic measure of concept association, based on its explicit semantic relations, and compute semantic similarity using a Graph Edit Distance based measure, which finds the optimal matching between the documents' entities using the Hungarian method. Experimental results show that our general model outperforms baselines built on top of traditional methods, and achieves a performance close to that of highly specialized methods that have been tuned to these specific tasks.",
"title": ""
},
{
"docid": "dce8b7c654a8f034f51d651ad3eabb28",
"text": "We characterize and improve an existing infrared relative localization/communication module used to find range and bearing between robots in small-scale multi-robot systems. Modifications to the algorithms of the original system are suggested which offer better performance. A mathematical model which accurately describes the system is presented and allows us to predict the performance of modules with augmented sensorial capabilities. Finally, the usefulness of the module is demonstrated in a multi-robot self-localization task using both a realistic robotic simulator and real robots, and the performance is analyzed",
"title": ""
},
{
"docid": "544fcade5c59365e8b77d4c474950f5f",
"text": "The designs of dual-band and wide-band microstrip patch antennas with conical radiation patterns are presented in this paper. The antenna is composed of a square-ring patch that is shorted to the ground plane through four shorting walls. Three resonant modes with conical radiation patterns can be simultaneously excited in the antenna structure by a patch-loaded coaxial probe inside the square-ring patch, and they can be designed as a dual-band operation. Moreover, by adjusting the width of the shorting walls, the three modes can be coupled together to realize a wide-band operation. From the obtained results, the 10 dB impedance bandwidths at lower and higher operating frequencies are respectively 42 and 8% for the dual-band antenna design, and the wide-band design exhibits an impedance bandwidth of about 70%.",
"title": ""
},
{
"docid": "90f1fad6777882f64480d6c9b76446d3",
"text": "In the arms race of attackers and defenders, the defense is usually more challenging than the attack due to the unpredicted vulnerabilities and newly emerging attacks every day. Currently, most of existing malware detection solutions are individually proposed to address certain types of attacks or certain evasion techniques. Thus, it is desired to conduct a systematic investigation and evaluation of anti-malware solutions and tools based on different attacks and evasion techniques. In this paper, we first propose a meta model for Android malware to capture the common attack features and evasion features in the malware. Based on this model, we develop a framework, MYSTIQUE, to automatically generate malware covering four attack features and two evasion features, by adopting the software product line engineering approach. With the help of MYSTIQUE, we conduct experiments to 1) understand Android malware and the associated attack features as well as evasion techniques; 2) evaluate and compare the 57 off-the-shelf anti-malware tools, 9 academic solutions and 4 App market vetting processes in terms of accuracy in detecting attack features and capability in addressing evasion. Last but not least, we provide a benchmark of Android malware with proper labeling of contained attack and evasion features.",
"title": ""
},
{
"docid": "19e3338e136197d9d8ab57225f762161",
"text": "We study the problem of combining multiple bandit algorithms (that is, online learning algorithms with partial feedback) with the goal of creating a master algorithm that performs almost as well as the best base algorithm if it were to be run on its own. The main challenge is that when run with a master, base algorithms unavoidably receive much less feedback and it is thus critical that the master not starve a base algorithm that might perform uncompetitively initially but would eventually outperform others if given enough feedback. We address this difficulty by devising a version of Online Mirror Descent with a special mirror map together with a sophisticated learning rate scheme. We show that this approach manages to achieve a more delicate balance between exploiting and exploring base algorithms than previous works yielding superior regret bounds. Our results are applicable to many settings, such as multi-armed bandits, contextual bandits, and convex bandits. As examples, we present two main applications. The first is to create an algorithm that enjoys worst-case robustness while at the same time performing much better when the environment is relatively easy. The second is to create an algorithm that works simultaneously under different assumptions of the environment, such as different priors or different loss structures.",
"title": ""
}
] |
scidocsrr
|
3f3ec98b8e4ff1821de93ea3027d6f62
|
Image Dehazing using Bilinear Composition Loss Function
|
[
{
"docid": "25f39a66710db781f4354f0da5974d61",
"text": "With the rapid development of economy in China over the past decade, air pollution has become an increasingly serious problem in major cities and caused grave public health concerns in China. Recently, a number of studies have dealt with air quality and air pollution. Among them, some attempt to predict and monitor the air quality from different sources of information, ranging from deployed physical sensors to social media. These methods are either too expensive or unreliable, prompting us to search for a novel and effective way to sense the air quality. In this study, we propose to employ the state of the art in computer vision techniques to analyze photos that can be easily acquired from online social media. Next, we establish the correlation between the haze level computed directly from photos with the official PM 2.5 record of the taken city at the taken time. Our experiments based on both synthetic and real photos have shown the promise of this image-based approach to estimating and monitoring air pollution.",
"title": ""
},
{
"docid": "8a77882cfe06eaa88db529432ed31b0c",
"text": "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.",
"title": ""
}
] |
[
{
"docid": "aa32bff910ce6c7b438dc709b28eefe3",
"text": "Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for a e-mail: [email protected] 482 The European Physical Journal Special Topics urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science",
"title": ""
},
{
"docid": "8199f1a48f81f18157fa5ecd80a29224",
"text": "Visual Question Answering (VQA) is a relatively new task, which tries to infer answer sentences for an input image coupled with a corresponding question. Instead of dynamically generating answers, they are usually inferred by finding the most probable answer from a fixed set of possible answers. Previous work did not address the problem of finding all possible answers, but only modeled the answering part of VQA as a classification task. To tackle this problem, we infer answer sentences by using a Long Short-Term Memory (LSTM) network that allows us to dynamically generate answers for (image, question) pairs. In a series of experiments, we discover an end-to-end Deep Neural Network structure, which allows us to dynamically answer questions referring to a given input image by using an LSTM decoder network. With this approach, we are able to generate both less common answers, which are not considered by classification models, and more complex answers with the appearance of datasets containing answers that consist of more than three words.",
"title": ""
},
{
"docid": "cdef5f6a50c1f427e8f37be3c6ebbccf",
"text": "In this article, we summarize the 5G mobile communication requirements and challenges. First, essential requirements for 5G are pointed out, including higher traffic volume, indoor or hotspot traffic, and spectrum, energy, and cost efficiency. Along with these changes of requirements, we present a potential step change for the evolution toward 5G, which shows that macro-local coexisting and coordinating paths will replace one macro-dominated path as in 4G and before. We hereafter discuss emerging technologies for 5G within international mobile telecommunications. Challenges and directions in hardware, including integrated circuits and passive components, are also discussed. Finally, a whole picture for the evolution to 5G is predicted and presented.",
"title": ""
},
{
"docid": "f3c38f45d58a3252d0b052848add4617",
"text": "In this paper, we present theoretical analysis of SON – a convex optimization procedure for clustering using a sum-of-norms (SON) regularization recently proposed in [8, 10, 11, 17]. In particular, we show if the samples are drawn from two cubes, each being one cluster, then SON can provably identify the cluster membership provided that the distance between the two cubes is larger than a threshold which (linearly) depends on the size of the cube and the ratio of numbers of samples in each cluster. To the best of our knowledge, this paper is the first to provide a rigorous analysis to understand why and when SON works. We believe this may provide important insights to develop novel convex optimization based algorithms for clustering.",
"title": ""
},
{
"docid": "17fd8358fd478385dfceb2090f4243ad",
"text": "The use of robotics in rehabilitation area provides a quantifying outcomes on treatment of after stroke patients. This paper presents the preliminary design of a novel exoskeleton robot, called NU-Wrist, for human wrist and forearm rehabilitation. The proposed robot design provides rotation within the anatomical range of human wrist and forearm motions. A novel compliant robot handle link ensures dynamic passive self-alignment of human-robot axes during therapy exercising. The proof-of-concept wrist robot prototype has been manufactured using 3D printing technology for experimental design evaluation. It is shown the proposed NU-Wrist robot design is satisfied to the specified rehabilitation system requirements.",
"title": ""
},
{
"docid": "a8f5f7c147c1ac8cabf86d4809aa3f65",
"text": "Structural gene rearrangements resulting in gene fusions are frequent events in solid tumours. The identification of certain activating fusions can aid in the diagnosis and effective treatment of patients with tumours harbouring these alterations. Advances in the techniques used to identify fusions have enabled physicians to detect these alterations in the clinic. Targeted therapies directed at constitutively activated oncogenic tyrosine kinases have proven remarkably effective against cancers with fusions involving ALK, ROS1, or PDGFB, and the efficacy of this approach continues to be explored in malignancies with RET, NTRK1/2/3, FGFR1/2/3, and BRAF/CRAF fusions. Nevertheless, prolonged treatment with such tyrosine-kinase inhibitors (TKIs) leads to the development of acquired resistance to therapy. This resistance can be mediated by mutations that alter drug binding, or by the activation of bypass pathways. Second-generation and third-generation TKIs have been developed to overcome resistance, and have variable levels of activity against tumours harbouring individual mutations that confer resistance to first-generation TKIs. The rational sequential administration of different inhibitors is emerging as a new treatment paradigm for patients with tumours that retain continued dependency on the downstream kinase of interest.",
"title": ""
},
{
"docid": "f194075ba0a5cf69d9bba9e127ed29bb",
"text": "Let's start from scratch in thinking about what memory is for, and consequently, how it works. Suppose that memory and conceptualization work in the service of perception and action. In this case, conceptualization is the encoding of patterns of possible physical interaction with a three-dimensional world. These patterns are constrained by the structure of the environment, the structure of our bodies, and memory. Thus, how we perceive and conceive of the environment is determined by the types of bodies we have. Such a memory would not have associations. Instead, how concepts become related (and what it means to be related) is determined by how separate patterns of actions can be combined given the constraints of our bodies. I call this combination \"mesh.\" To avoid hallucination, conceptualization would normally be driven by the environment, and patterns of action from memory would play a supporting, but automatic, role. A significant human skill is learning to suppress the overriding contribution of the environment to conceptualization, thereby allowing memory to guide conceptualization. The effort used in suppressing input from the environment pays off by allowing prediction, recollective memory, and language comprehension. I review theoretical work in cognitive science and empirical work in memory and language comprehension that suggest that it may be possible to investigate connections between topics as disparate as infantile amnesia and mental-model theory.",
"title": ""
},
{
"docid": "8d1797caf78004e6ba548ace7d5a1161",
"text": "An automated irrigation system was developed to optimize water use for agricultural crops. The system has a distributed wireless network of soil-moisture and temperature sensors placed in the root zone of the plants. In addition, a gateway unit handles sensor information, triggers actuators, and transmits data to a web application. An algorithm was developed with threshold values of temperature and soil moisture that was programmed into a microcontroller-based gateway to control water quantity. The system was powered by photovoltaic panels and had a duplex communication link based on a cellular-Internet interface that allowed for data inspection and irrigation scheduling to be programmed through a web page. The automated system was tested in a sage crop field for 136 days and water savings of up to 90% compared with traditional irrigation practices of the agricultural zone were achieved. Three replicas of the automated system have been used successfully in other places for 18 months. Because of its energy autonomy and low cost, the system has the potential to be useful in water limited geographically isolated areas.",
"title": ""
},
{
"docid": "90224ff86d94c82e5d9b5bc8164fcc2e",
"text": "Reading Comprehension (RC) of text is one of the fundamental tasks in natural language processing. In recent years, several end-to-end neural network models have been proposed to solve RC tasks. However, most of these models suffer in reasoning over long documents. In this work, we propose a novel Memory Augmented Machine Comprehension Network (MAMCN) to address long-range dependencies present in machine reading comprehension. We perform extensive experiments to evaluate proposed method with the renowned benchmark datasets such as SQuAD, QUASAR-T, and TriviaQA. We achieve the state of the art performance on both the document-level (QUASAR-T, TriviaQA) and paragraph-level (SQuAD) datasets compared to all the previously published approaches.",
"title": ""
},
{
"docid": "226a1a6bd37f75b5dfbb7655ec859f25",
"text": "Glenn N. Levine, MD, FAHA, Chair, Anthony V. D’Amico, MD, PhD, Peter Berger, MD, FAHA, Peter E. Clark, MD, Robert H. Eckel, MD, FAHA, Nancy L. Keating, MD, MPH, Richard V. Milani, MD, FAHA, Arthur I. Sagalowsky, MD, Matthew R. Smith, MD, PhD, Neil Zakai, MD on behalf of the American Heart Association Council on Clinical Cardiology and Council on Epidemiology and Prevention, the American Cancer Society, and the American Urological Association",
"title": ""
},
{
"docid": "8bd9e3fe5d2b6fe8d58a86baf3de3522",
"text": "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.",
"title": ""
},
{
"docid": "771834bc4bfe8231fe0158ec43948bae",
"text": "Semantic image segmentation has recently witnessed considerable progress by training deep convolutional neural networks (CNNs). The core issue of this technique is the limited capacity of CNNs to depict visual objects. Existing approaches tend to utilize approximate inference in a discrete domain or additional aides and do not have a global optimum guarantee. We propose the use of the multi-label manifold ranking (MR) method in solving the linear objective energy function in a continuous domain to delineate visual objects and solve these problems. We present a novel embedded single stream optimization method based on the MR model to avoid approximations without sacrificing expressive power. In addition, we propose a novel network, which we refer to as dual multi-scale manifold ranking (DMSMR) network, that combines the dilated, multi-scale strategies with the single stream MR optimization method in the deep learning architecture to further improve the performance. Experiments on high resolution images, including close-range and remote sensing datasets, demonstrate that the proposed approach can achieve competitive accuracy without additional aides in an end-to-end manner.",
"title": ""
},
{
"docid": "aad2d6385cb8c698a521caea00fe56d2",
"text": "With respect to the \" influence on the development and practice of science and engineering in the 20th century \" , Krylov space methods are considered as one of the ten most important classes of numerical methods [1]. Large sparse linear systems of equations or large sparse matrix eigenvalue problems appear in most applications of scientific computing. Sparsity means that most elements of the matrix involved are zero. In particular, discretization of PDEs with the finite element method (FEM) or with the finite difference method (FDM) leads to such problems. In case the original problem is nonlinear, linearization by Newton's method or a Newton-type method leads again to a linear problem. We will treat here systems of equations only, but many of the numerical methods for large eigenvalue problems are based on similar ideas as the related solvers for equations. Sparse linear systems of equations can be solved by either so-called sparse direct solvers, which are clever variations of Gauss elimination, or by iterative methods. In the last thirty years, sparse direct solvers have been tuned to perfection: on the one hand by finding strategies for permuting equations and unknowns to guarantee a stable LU decomposition and small fill-in in the triangular factors, and on the other hand by organizing the computation so that optimal use is made of the hardware, which nowadays often consists of parallel computers whose architecture favors block operations with data that are locally stored or cached. The iterative methods that are today applied for solving large-scale linear systems are mostly preconditioned Krylov (sub)space solvers. Classical methods that do not belong to this class, like the successive overrelaxation (SOR) method, are no longer competitive. However, some of the classical matrix splittings, e.g. the one of SSOR (the symmetric version of SOR), are still used for preconditioning. Multigrid is in theory a very effective iterative method, but normally it is now applied as an inner iteration with a Krylov space solver as outer iteration; then, it can also be considered as a preconditioner. In the past, Krylov space solvers were referred to also by other names such as semi-iterative methods and polynomial acceleration methods. Some",
"title": ""
},
{
"docid": "061c67c967818b1a0ad8da55345c6dcf",
"text": "The paper aims at revealing the essence and connotation of Computational Thinking. It analyzed some of the international academia’s research results of Computational Thinking. The author thinks Computational Thinking is discipline thinking or computing philosophy, and it is very critical to understand Computational Thinking to grasp the thinking’ s computational features and the computing’s thinking attributes. He presents the basic rules of screening the representative terms of Computational Thinking and lists some representative terms based on the rules. He thinks Computational Thinking is contained in the commonalities of those terms. The typical thoughts of Computational Thinking are structuralization, formalization, association-and-interaction, optimization and reuse-and-sharing. Training Computational Thinking must base on the representative terms and the typical thoughts. There are three innovations in the paper: the five rules of screening the representative terms, the five typical thoughts and the formalized description of Computational Thinking.",
"title": ""
},
{
"docid": "0f10bb2afc1797fad603d8c571058ecb",
"text": "This paper presents findings from the All Wales Hate Crime Project. Most hate crime research has focused on discrete victim types in isolation. For the first time, internationally, this paper examines the psychological and physical impacts of hate crime across seven victim types drawing on quantitative and qualitative data. It contributes to the hate crime debate in two significant ways: (1) it provides the first look at the problem in Wales and (2) it provides the first multi-victim-type analysis of hate crime, showing that impacts are not homogenous across victim groups. The paper provides empirical credibility to the impacts felt by hate crime victims on the margins who have routinely struggled to gain support.",
"title": ""
},
{
"docid": "241f33036b6b60e826da63d2b95dddac",
"text": "Technology changes have been acknowledged as a critical factor in determining competitiveness of organization. Under such environment, the right anticipation of technology change has been of huge importance in strategic planning. To monitor technology change, technology forecasting (TF) is frequently utilized. In academic perspective, TF has received great attention for a long time. However, few researches have been conducted to provide overview of the TF literature. Even though some studies deals with review of TF research, they generally focused on type and characteristics of various TF, so hardly provides information about patterns of TF research and which TF method is used in certain technology industry. Accordingly, this study profile developments in and patterns of scholarly research in TF over time. Also, this study investigates which technology industries have used certain TF method and identifies their relationships. This study will help in understanding TF research trend and their application area. Keywords—Technology forecasting, technology industry, TF trend, technology trajectory.",
"title": ""
},
{
"docid": "ad50525ba815295122d34f8008dea9ab",
"text": "Real-time scheduling algorithms like RMA or EDF and their corresponding schedulability test have proven to be powerful tools for developing predictable real-time systems. However, the traditional interrupt management model presents multiple inconsistencies that break the assumptions of many of the real-time scheduling tests, diminishing its utility. In this article, we analyze these inconsistencies and present a model that resolves them by integrating interrupts and tasks in a single scheduling model. We then use the RMA theory to calculate the cost of the model and analyze the circumstances under which it can provide the most value. This model was implemented in a kernel module. The portability of the design of our module is discussed in terms of its independence from both the hardware and the kernel. We also discuss the implementation issues of the model over conventional PC hardware, along with its cost and novel optimizations for reducing the overhead. Finally, we present our experimental evaluation to show evidence of its temporal determinism and overhead.",
"title": ""
},
{
"docid": "b40bbfc19072efc645e5f1d6fb1d89e7",
"text": "With the development of information technologies, a great amount of semantic data is being generated on the web. Consequently, finding efficient ways of accessing this data becomes more and more important. Question answering is a good compromise between intuitiveness and expressivity, which has attracted the attention of researchers from different communities. In this paper, we propose an intelligent questing answering system for answering questions about concepts. It is based on ConceptRDF, which is an RDF presentation of the ConceptNet knowledge base. We use it as a knowledge base for answering questions. Our experimental results show that our approach is promising: it can answer questions about concepts at a satisfactory level of accuracy (reaches 94.5%).",
"title": ""
},
{
"docid": "c7b9c324171d40cec24ed089933a06ce",
"text": "With the proliferation of the internet and increased global access to online media, cybercrime is also occurring at an increasing rate. Currently, both personal users and companies are vulnerable to cybercrime. A number of tools including firewalls and Intrusion Detection Systems (IDS) can be used as defense mechanisms. A firewall acts as a checkpoint which allows packets to pass through according to predetermined conditions. In extreme cases, it may even disconnect all network traffic. An IDS, on the other hand, automates the monitoring process in computer networks. The streaming nature of data in computer networks poses a significant challenge in building IDS. In this paper, a method is proposed to overcome this problem by performing online classification on datasets. In doing so, an incremental naive Bayesian classifier is employed. Furthermore, active learning enables solving the problem using a small set of labeled data points which are often very expensive to acquire. The proposed method includes two groups of actions i.e. offline and online. The former involves data preprocessing while the latter introduces the NADAL online method. The proposed method is compared to the incremental naive Bayesian classifier using the NSL-KDD standard dataset. There are three advantages with the proposed method: (1) overcoming the streaming data challenge; (2) reducing the high cost associated with instance labeling; and (3) improved accuracy and Kappa compared to the incremental naive Bayesian approach. Thus, the method is well-suited to IDS applications.",
"title": ""
}
] |
scidocsrr
|
9d992e12bc50a204f6f17ba9792c600e
|
Reinforcement Learning For Automated Trading
|
[
{
"docid": "be692c1251cb1dc73b06951c54037701",
"text": "Can we train the computer to beat experienced traders for financial assert trading? In this paper, we try to address this challenge by introducing a recurrent deep neural network (NN) for real-time financial signal representation and trading. Our model is inspired by two biological-related learning concepts of deep learning (DL) and reinforcement learning (RL). In the framework, the DL part automatically senses the dynamic market condition for informative feature learning. Then, the RL module interacts with deep representations and makes trading decisions to accumulate the ultimate rewards in an unknown environment. The learning system is implemented in a complex NN that exhibits both the deep and recurrent structures. Hence, we propose a task-aware backpropagation through time method to cope with the gradient vanishing issue in deep training. The robustness of the neural system is verified on both the stock and the commodity future markets under broad testing conditions.",
"title": ""
}
] |
[
{
"docid": "7247eb6b90d23e2421c0d2500359d247",
"text": "The large-scale collection and exploitation of personal information to drive targeted online advertisements has raised privacy concerns. As a step towards understanding these concerns, we study the relationship between how much information is collected and how valuable it is for advertising. We use HTTP traces consisting of millions of users to aid our study and also present the first comparative study between aggregators. We develop a simple model that captures the various parameters of today's advertising revenues, whose values are estimated via the traces. Our results show that per aggregator revenue is skewed (5% accounting for 90% of revenues), while the contribution of users to advertising revenue is much less skewed (20% accounting for 80% of revenue). Google is dominant in terms of revenue and reach (presence on 80% of publishers). We also show that if all 5% of the top users in terms of revenue were to install privacy protection, with no corresponding reaction from the publishers, then the revenue can drop by 30%.",
"title": ""
},
{
"docid": "4d6082ab565b98ea6aa88a68ba781fca",
"text": "Over the past decade, deep learning has achieved remarkable success in various artificial intelligence research areas. Evolved from the previous research on artificial neural networks, this technology has shown superior performance to other machine learning algorithms in areas such as image and voice recognition, natural language processing, among others. The first wave of applications of deep learning in pharmaceutical research has emerged in recent years, and its utility has gone beyond bioactivity predictions and has shown promise in addressing diverse problems in drug discovery. Examples will be discussed covering bioactivity prediction, de novo molecular design, synthesis prediction and biological image analysis.",
"title": ""
},
{
"docid": "3f36b23dd997649b8df6c7fa7fb73963",
"text": "This paper presents a virtual impedance design and implementation approach for power electronics interfaced distributed generation (DG) units. To improve system stability and prevent power couplings, the virtual impedances can be placed between interfacing converter outputs and the main grid. However, optimal design of the impedance value, robust implementation of the virtual impedance, and proper utilization of the virtual impedance for DG performance enhancement are key for the virtual impedance concept. In this paper, flexible small-signal models of microgrids in different operation modes are developed first. Based on the developed microgrid models, the desired DG impedance range is determined considering the stability, transient response, and power flow performance of DG units. A robust virtual impedance implementation method is also presented, which can alleviate voltage distortion problems caused by harmonic loads compared to the effects of physical impedances. Furthermore, an adaptive impedance concept is proposed to further improve power control performances during the transient and grid faults. Simulation and experimental results are provided to validate the impedance design approach, the virtual impedance implementation method, and the proposed adaptive transient impedance control strategies.",
"title": ""
},
{
"docid": "26e79793addc4750dcacc0408764d1e1",
"text": "It has been shown that integration of acoustic and visual information especially in noisy conditions yields improved speech recognition results. This raises the question of how to weight the two modalities in different noise conditions. Throughout this paper we develop a weighting process adaptive to various background noise situations. In the presented recognition system, audio and video data are combined following a Separate Integration (SI) architecture. A hybrid Artificial Neural Network/Hidden Markov Model (ANN/HMM) system is used for the experiments. The neural networks were in all cases trained on clean data. Firstly, we evaluate the performance of different weighting schemes in a manually controlled recognition task with different types of noise. Next, we compare different criteria to estimate the reliability of the audio stream. Based on this, a mapping between the measurements and the free parameter of the fusion process is derived and its applicability is demonstrated. Finally, the possibilities and limitations of adaptive weighting are compared and discussed.",
"title": ""
},
{
"docid": "9a136517edbfce2a7c6b302da9e6c5b7",
"text": "This paper presents our approach to semantic relatedness and textual entailment subtasks organized as task 1 in SemEval 2014. Specifically, we address two questions: (1) Can we solve these two subtasks together? (2) Are features proposed for textual entailment task still effective for semantic relatedness task? To address them, we extracted seven types of features including text difference measures proposed in entailment judgement subtask, as well as common text similarity measures used in both subtasks. Then we exploited the same feature set to solve the both subtasks by considering them as a regression and a classification task respectively and performed a study of influence of different features. We achieved the first and the second rank for relatedness and entailment task respectively.",
"title": ""
},
{
"docid": "67e35bc7add5d6482fff4cd4f2060e6b",
"text": "There is a clear trend in the automotive industry to use more electrical systems in order to satisfy the ever-growing vehicular load demands. Thus, it is imperative that automotive electrical power systems will obviously undergo a drastic change in the next 10-20 years. Currently, the situation in the automotive industry is such that the demands for higher fuel economy and more electric power are driving advanced vehicular power system voltages to higher levels. For example, the projected increase in total power demand is estimated to be about three to four times that of the current value. This means that the total future power demand of a typical advanced vehicle could roughly reach a value as high as 10 kW. In order to satisfy this huge vehicular load, the approach is to integrate power electronics intensive solutions within advanced vehicular power systems. In view of this fact, this paper aims at reviewing the present situation as well as projected future research and development work of advanced vehicular electrical power systems including those of electric, hybrid electric, and fuel cell vehicles (EVs, HEVs, and FCVs). The paper will first introduce the proposed power system architectures for HEVs and FCVs and will then go on to exhaustively discuss the specific applications of dc/dc and dc/ac power electronic converters in advanced automotive power systems",
"title": ""
},
{
"docid": "f8b487342f4eaa4931f4a65cbc420b89",
"text": "Existing sequence prediction methods are mostly concerned with time-independent sequences, in which the actual time span between events is irrelevant and the distance between events is simply the difference between their order positions in the sequence. While this time-independent view of sequences is applicable for data such as natural languages, e.g., dealing with words in a sentence, it is inappropriate and inefficient for many real world events that are observed and collected at unequally spaced points of time as they naturally arise, e.g., when a person goes to a grocery store or makes a phone call. The time span between events can carry important information about the sequence dependence of human behaviors. In this work, we propose a set of methods for using time in sequence prediction. Because neural sequence models such as RNN are more amenable for handling token-like input, we propose two methods for time-dependent event representation, based on the intuition on how time is tokenized in everyday life and previous work on embedding contextualization. We also introduce two methods for using next event duration as regularization for training a sequence prediction model. We discuss these methods based on recurrent neural nets. We evaluate these methods as well as baseline models on five datasets that resemble a variety of sequence prediction tasks. The experiments revealed that the proposed methods offer accuracy gain over baseline models in a range of settings.",
"title": ""
},
{
"docid": "a73c24a7db1694ac9c260fc756cb5682",
"text": "Researchers are developing mobile sensing platforms to facilitate public awareness of environmental conditions. However, turning such awareness into practical community action and political change requires more than just collecting and presenting data. To inform research on mobile environmental sensing, we conducted design fieldwork with government, private, and public interest stakeholders. In parallel, we built an environmental air quality sensing system and deployed it on street sweeping vehicles in a major U.S. city; this served as a research vehicle\"by grounding our interviews and affording us status as environmental action researchers. In this paper, we present a qualitative analysis of the landscape of environmental action, focusing on insights that will help researchers frame meaningful technological interventions.",
"title": ""
},
{
"docid": "c9c4ed4a7e8e6ef8ca2bcf146001d2e5",
"text": "Microblogging services such as Twitter are said to have the potential for increasing political participation. Given the feature of 'retweeting' as a simple yet powerful mechanism for information diffusion, Twitter is an ideal platform for users to spread not only information in general but also political opinions through their networks as Twitter may also be used to publicly agree with, as well as to reinforce, someone's political opinions or thoughts. Besides their content and intended use, Twitter messages ('tweets') also often convey pertinent information about their author's sentiment. In this paper, we seek to examine whether sentiment occurring in politically relevant tweets has an effect on their retweetability (i.e., how often these tweets will be retweeted). Based on a data set of 64,431 political tweets, we find a positive relationship between the quantity of words indicating affective dimensions, including positive and negative emotions associated with certain political parties or politicians, in a tweet and its retweet rate. Furthermore, we investigate how political discussions take place in the Twitter network during periods of political elections with a focus on the most active and most influential users. Finally, we conclude by discussing the implications of our results.",
"title": ""
},
{
"docid": "f698b77df48a5fac4df7ba81b4444dd5",
"text": "Discontinuous-conduction mode (DCM) operation is usually employed in DC-DC converters for small inductor on printed circuit board (PCB) and high efficiency at light load. However, it is normally difficult for synchronous converter to realize the DCM operation, especially in high frequency applications, which requires a high speed and high precision comparator to detect the zero crossing point at cost of extra power losses. In this paper, a novel zero current detector (ZCD) circuit with an adaptive delay control loop for high frequency synchronous buck converter is presented. Compared to the conventional ZCD, proposed technique is proven to offer 8.5% efficiency enhancement when performed in a buck converter at the switching frequency of 4MHz and showed less sensitivity to the transistor mismatch of the sensor circuit.",
"title": ""
},
{
"docid": "e0fd648da901ed99ddbed3457bc83cfe",
"text": "This clinical trial assessed the ability of Gluma Dentin Bond to inhibit dentinal sensitivity in teeth prepared to receive complete cast restorations. Twenty patients provided 76 teeth for the study. Following tooth preparation, dentinal surfaces were coated with either sterile water (control) or two 30-second applications of Gluma Dentin Bond (test) on either intact or removed smear layers. Patients were recalled after 14 days for a test of sensitivity of the prepared dentin to compressed air, osmotic stimulus (saturated CaCl2 solution), and tactile stimulation via a scratch test under controlled loads. A significantly lower number of teeth responded to the test stimuli for both Gluma groups when compared to the controls (P less than .01). No difference was noted between teeth with smear layers intact or removed prior to treatment with Gluma.",
"title": ""
},
{
"docid": "00e06f34117dc96ec6f7a5fba47b3f5f",
"text": "This paper presents a new algorithm for downloading big files from multiple sources in peer-to-peer networks. The algorithm is compelling with the simplicity of its implementation and the novel properties it offers. It ensures low hand-shaking cost between peers who intend to download a file (or parts of a file) from each other. Furthermore, it achieves maximal file availability, meaning that any two peers with partial knowledge of a given file will almost always be able to fully benefit from each other’s knowledge– i.e., overlapping knowledge will rarely occur. Our algorithm is made possible by the recent introduction of linear-time rateless erasure codes.",
"title": ""
},
{
"docid": "0c526d2684665bbdbefefd78cf9a05dd",
"text": "Road recognition from video sequences has been solved robustly only for small, often simplified subsets of possible road configurations. A massive augmentation of the amount of prior knowledge may pave the way towards a generation of estimators of more general applicability. This contribution introduces Description Logic extended by rules as a promising knowledge representation formalism for scene understanding. A Description Logic knowledge base for arbitrary road and intersection geometries and configurations is set up. Logically stated geometric constraints and road building regulations constrain the hypothesis space. Sensor data from an in-vehicle vision sensor and from a digital map provide evidence for a particular intersection. Partial observability and different abstraction layers of the input data are naturally handled. Deductive inference services – namely classification, entailment, satisfiability and consistency – are then used to narrow down the intersection hypothesis space based on the evidence and the background knowledge, and to retrieve intersection information relevant to a user, i.e. a human or a driver assistance system. The paper concludes with an outlook towards non-deductive inference, namely model construction, and probabilistic inference.",
"title": ""
},
{
"docid": "63d2a703557246e33acff872efbe80a1",
"text": "We propose a stochastic gradient Markov chain Monte Carlo (SG-MCMC) algorithm for scalable inference in mixed-membership stochastic blockmodels (MMSB). Our algorithm is based on the stochastic gradient Riemannian Langevin sampler and achieves both faster speed and higher accuracy at every iteration than the current state-of-the-art algorithm based on stochastic variational inference. In addition we develop an approximation that can handle models that entertain a very large number of communities. The experimental results show that SG-MCMC strictly dominates competing algorithms in all cases.",
"title": ""
},
{
"docid": "4c0c6373c40bd42417fa2890fc80986b",
"text": "Regularized inversion methods for image reconstruction are used widely due to their tractability and their ability to combine complex physical sensor models with useful regularity criteria. Such methods were used in the recently developed Plug-and-Play prior method, which provides a framework to use advanced denoising algorithms as regularizers in inversion. However, the need to formulate regularized inversion as the solution to an optimization problem severely limits both the expressiveness of possible regularity conditions and the variety of provably convergent Plug-and-Play denoising operators. In this paper, we introduce the concept of consensus equilibrium (CE), which generalizes regularized inversion to include a much wider variety of regularity operators without the need for an optimization formulation. Consensus equilibrium is based on the solution of a set of equilibrium equations that balance data fit and regularity. In this framework, the problem of MAP estimation in regularized inversion is replaced by the problem of solving these equilibrium equations, which can be approached in multiple ways, including as a fixed point problem that generalizes the ADMM approach used in the Plug-and-Play method. We present the Douglas-Rachford (DR) algorithm for computing the CE solution as a fixed point and prove the convergence of this algorithm under conditions that include denoising operators that do not arise from optimization problems and that may not be nonexpansive. We give several examples to illustrate the idea of consensus equilibrium and the convergence properties of the DR algorithm and demonstrate this method on a sparse interpolation problem using electron microscopy data.",
"title": ""
},
{
"docid": "a9c4f01cfdbdde6245d99a9c5056f83f",
"text": "Brachyolmia (BO) is a heterogeneous group of skeletal dysplasias with skeletal changes limited to the spine or with minimal extraspinal features. BO is currently classified into types 1, 2, 3, and 4. BO types 1 and 4 are autosomal recessive conditions caused by PAPSS2 mutations, which may be merged together as an autosomal recessive BO (AR-BO). The clinical and radiological signs of AR-BO in late childhood have already been reported; however, the early manifestations and their age-dependent evolution have not been well documented. We report an affected boy with AR-BO, whose skeletal abnormalities were detected in utero and who was followed until 10 years of age. Prenatal ultrasound showed bowing of the legs. In infancy, radiographs showed moderate platyspondyly and dumbbell deformity of the tubular bones. Gradually, the platyspondyly became more pronounced, while the bowing of the legs and dumbbell deformities of the tubular bones diminished with age. In late childhood, the overall findings were consistent with known features of AR-BO. Genetic testing confirmed the diagnosis. Being aware of the initial skeletal changes may facilitate early diagnosis of PAPSS2-related skeletal dysplasias.",
"title": ""
},
{
"docid": "5d6cb3669a277e0aed4f75506f158dd5",
"text": "The following sections will apply the foregoing induction systems to three specific types of problems, and discuss the “reasonableness” of the results obtained. Section 4.1 deals with the Bernoulli sequence. The predictions obtained are identical to those given by “Laplace’s Rule of Succession.” A particularly important technique is used to code the original sequence into a set of integers which constitute its “descriptions” for the problems of Sections 4.2 and 4.3. Section 4.2 deals with the extrapolation of a sequence in which there are certain kinds of intersymbol constraints. Codes for such sequences are devised by defining special symbols for subsequences whose frequencies are unusually high or low. Some properties of this coding method are discussed, and they are found to be intuitively reasonable. A preliminary computer program has been written for induction using this coding method. However, there are some important simplifications used in the program, and it is uncertain as to whether it can make useful predictions. Section 4.3 describes the use of phrase structure grammars for induction. A formal solution is presented and although the resultant analysis indicates that this model conforms to some extent to intuitive expectations, the author feels that it still has at least one serious shortcoming in that it has no good means",
"title": ""
},
{
"docid": "be28e8967a316c1e5748e131e17950ba",
"text": "Walking is the most natural form of locomotion for humans, and real walking interfaces have demonstrated their benefits for several navigation tasks. With recently proposed redirection techniques it becomes possible to overcome space limitations as imposed by tracking sensors or laboratory setups, and, theoretically, it is now possible to walk through arbitrarily large virtual environments. However, walking as sole locomotion technique has drawbacks, in particular, for long distances, such that even in the real world we tend to support walking with passive or active transportation for longer-distance travel. In this article we show that concepts from the field of redirected walking can be applied to movements with transportation devices. We conducted psychophysical experiments to determine perceptual detection thresholds for redirected driving, and set these in relation to results from redirected walking. We show that redirected walking-and-driving approaches can easily be realized in immersive virtual reality laboratories, e. g., with electric wheelchairs, and show that such systems can combine advantages of real walking in confined spaces with benefits of using vehiclebased self-motion for longer-distance travel.",
"title": ""
},
{
"docid": "196ec106352cb2c48ae81dcc4b989bbf",
"text": "This work discusses the way people have used plants over time (basically since Ancient Egypt) to care for their physical aspect, and also how natural resources (especially plants) are currently used in personal-care products. Many plant species are ancient. This paper also shows examples of plants used for personal care which are investigated with new scientific advances.",
"title": ""
},
{
"docid": "b8bb4d195738e815430d146ac110df49",
"text": "Software testing is an effective way to find software errors. Generating a good test suite is the key. A program invariant is a property that is true at a particular program point or points. The property could reflect the program’s execution over a test suite. Based on this point, we integrate the random test case generation technique and the invariant extraction technique, achieving automatic test case generation and selection. With the same invariants, compared with the traditional random test case generation technique, the experimental results show that the approach this paper describes can generate a smaller test suite. Keywords-software testing; random testing; test case; program invariant",
"title": ""
}
] |
scidocsrr
|
b9d78f66116a502c50c8b5b0f6e8ef6a
|
A Systematic Review of Cognitive Behavioral Therapy and Behavioral Activation Apps for Depression.
|
[
{
"docid": "f66854fd8e3f29ae8de75fc83d6e41f5",
"text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"title": ""
}
] |
[
{
"docid": "e6ff5af0a9d6105a60771a2c447fab5e",
"text": "Object detection and classification in 3D is a key task in Automated Driving (AD). LiDAR sensors are employed to provide the 3D point cloud reconstruction of the surrounding environment, while the task of 3D object bounding box detection in real time remains a strong algorithmic challenge. In this paper, we build on the success of the oneshot regression meta-architecture in the 2D perspective image space and extend it to generate oriented 3D object bounding boxes from LiDAR point cloud. Our main contribution is in extending the loss function of YOLO v2 to include the yaw angle, the 3D box center in Cartesian coordinates and the height of the box as a direct regression problem. This formulation enables real-time performance, which is essential for automated driving. Our results are showing promising figures on KITTI benchmark, achieving real-time performance (40 fps) on Titan X GPU.",
"title": ""
},
{
"docid": "5b3ca1cc607d2e8f0394371f30d9e83a",
"text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.",
"title": ""
},
{
"docid": "d196a240fd8adf864ef191cbba1019a5",
"text": "The role of gender in shaping achievement motivation has a long history in psychological and educational research. In this review, gender differences in motivation are examined using four contemporary theories of achievement motivation, including attribution, expectancy-value, selfefficacy, and achievement goal perspectives. Across all theories, findings indicate girls’ and boys’ motivation-related beliefs and behaviors continue to follow gender role stereotypes. Boys report stronger ability and interest beliefs in mathematics and science, whereas girls have more confidence and interest in language arts and writing. Gender effects are moderated by ability, ethnicity, socioeconomic status, and classroom context. Additionally, developmental research indicates that gender differences in motivation are evident early in school, and increase for reading and language arts over the course of school. The role of the home and school environment in the development of these gender patterns is examined. Important implications for school professionals are highlighted. D 2006 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d3eeb9e96881dc3bd60433bdf3e89749",
"text": "The first € price and the £ and $ price are net prices, subject to local VAT. Prices indicated with * include VAT for books; the €(D) includes 7% for Germany, the €(A) includes 10% for Austria. Prices indicated with ** include VAT for electronic products; 19% for Germany, 20% for Austria. All prices exclusive of carriage charges. Prices and other details are subject to change without notice. All errors and omissions excepted. M. Bushnell, V.D. Agrawal Essentials of Electronic Testing for Digital, Memory and MixedSignal VLSI Circuits",
"title": ""
},
{
"docid": "c1d5df0e2058e3f191a8227fca51a2fb",
"text": "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.",
"title": ""
},
{
"docid": "deaac85f67d188abf8a764ee3dd20b58",
"text": "Automotive painting shops consume electricity and natural gas to provide the required temperature and humidity for painting processes. The painting shop is not only responsible for a significant portion of energy consumption with automobile manufacturers, but also affects the quality of the product. Various storage devices play a crucial role in the management of multiple energy systems. It is thus of great practical interest to manage the storage devices together with other energy systems to provide the required environment with minimal cost. In this paper, we formulate the scheduling problem of these multiple energy systems as a Markov decision process (MDP) and then provide two approximate solution methods. Method 1 is dynamic programming with value function approximation. Method 2 is mixed integer programming with mean value approximation. The performance of the two methods is demonstrated on numerical examples. The results show that method 2 provides good solutions fast and with little performance degradation comparing with method 1. Then, we apply method 2 to optimize the capacity and to select the combination of the storage devices, and demonstrate the performance by numerical examples.",
"title": ""
},
{
"docid": "f7ce2995fc0369fb8198742a5f1fefa3",
"text": "In this paper, we present a novel method for multimodal gesture recognition based on neural networks. Our multi-stream recurrent neural network (MRNN) is a completely data-driven model that can be trained from end to end without domain-specific hand engineering. The MRNN extends recurrent neural networks with Long Short-Term Memory cells (LSTM-RNNs) that facilitate the handling of variable-length gestures. We propose a recurrent approach for fusing multiple temporal modalities using multiple streams of LSTM-RNNs. In addition, we propose alternative fusion architectures and empirically evaluate the performance and robustness of these fusion strategies. Experimental results demonstrate that the proposed MRNN outperforms other state-of-theart methods in the Sheffield Kinect Gesture (SKIG) dataset, and has significantly high robustness to noisy inputs.",
"title": ""
},
{
"docid": "80309e993643ab7d07afe6e987c6eb93",
"text": "OBJECTIVE\nIn later stages of type 2 diabetes, proinsulin and proinsulin-like molecules are secreted in increasing amounts with insulin. A recently introduced chemiluminescence assay is able to detect the uncleaved \"intact\" proinsulin and differentiate it from proinsulin-like molecules. This investigation explored the predictive value of intact proinsulin as an insulin resistance marker.\n\n\nRESEARCH DESIGN AND METHODS\nIn total, 48 patients with type 2 diabetes (20 women and 28 men, aged 60 +/- 9 years [means +/- SD], diabetes duration 5.1 +/- 3.8 years, BMI 31.2 +/- 4.8 kg/m2, and HbA1c 6.9 +/- 1.2%) were studied by means of an intravenous glucose tolerance test and determination of fasting values of intact proinsulin, insulin, resistin, adiponectin, and glucose. Insulin resistance was determined by means of minimal model analysis (MMA) (as the gold standard) and homeostatis model assessment (HOMA).\n\n\nRESULTS\nThere was a significant correlation between intact proinsulin values and insulin resistance (MMA P<0.05 and HOMA P<0.01). Elevation of intact proinsulin values above the reference range (>10 pmol/l) showed a very high specificity (MMA 100% and HOMA 92.9%) and a moderate sensitivity (MMA 48.6% and HOMA 47.1%) as marker for insulin resistance. Adiponectin values were slightly lower in the insulin resistant group, but no correlation to insulin resistance could be detected for resistin in the cross-sectional design.\n\n\nCONCLUSIONS\nElevated intact proinsulin seems to indicate an advanced stage of beta-cell exhaustion and is a highly specific marker for insulin resistance. It might be used as arbitrary marker for the therapeutic decision between secretagogue, sensitizer, or insulin therapy in type 2 diabetes.",
"title": ""
},
{
"docid": "f27c527dce75f1006ceff2b77d4e76b8",
"text": "Geckos are exceptional in their ability to climb rapidly up smooth vertical surfaces. Microscopy has shown that a gecko's foot has nearly five hundred thousand keratinous hairs or setae. Each 30–130 µm long seta is only one-tenth the diameter of a human hair and contains hundreds of projections terminating in 0.2–0.5 µm spatula-shaped structures. After nearly a century of anatomical description, here we report the first direct measurements of single setal force by using a two-dimensional micro-electro-mechanical systems force sensor and a wire as a force gauge. Measurements revealed that a seta is ten times more effective at adhesion than predicted from maximal estimates on whole animals. Adhesive force values support the hypothesis that individual seta operate by van der Waals forces. The gecko's peculiar behaviour of toe uncurling and peeling led us to discover two aspects of setal function which increase their effectiveness. A unique macroscopic orientation and preloading of the seta increased attachment force 600-fold above that of frictional measurements of the material. Suitably orientated setae reduced the forces necessary to peel the toe by simply detaching above a critical angle with the substratum.",
"title": ""
},
{
"docid": "6bdb8048915000b2d6c062e0e71b8417",
"text": "Depressive disorders are the most typical disease affecting many different factors of humanity. University students may be at increased risk of depression owing to the pressure and stress they encounter. Therefore, the purpose of this study is comparing the level of depression among male and female athletes and non-athletes undergraduate student of private university in Esfahan, Iran. The participants in this research are composed of 400 male and female athletes as well as no-athletes Iranian undergraduate students. The Beck depression test (BDI) was employed to measure the degree of depression. T-test was used to evaluate the distinction between athletes and non-athletes at P≤0.05. The ANOVA was conducted to examine whether there was a relationship between level of depression among non-athletes and athletes. The result showed that the prevalence rate of depression among non-athlete male undergraduate students is significantly higher than that of athlete male students. The results also presented that level of depression among female students is much more frequent compared to males. This can be due to the fatigue and lack of energy that are more frequent among female in comparison to the male students. Physical activity was negatively related to the level of depression by severity among male and female undergraduate students. However, there is no distinct relationship between physical activity and level of depression according to the age of athlete and nonathlete male and female undergraduate students. This study has essential implications for clinical psychology due to the relationship between physical activity and prevalence of depression.",
"title": ""
},
{
"docid": "2cc36985606c3d82b230165a8f025228",
"text": "This paper is aimed at designing a congestion control system that scales gracefully with network capacity, providing high utilization, low queueing delay, dynamic stability, and fairness among users. In earlier work we had developed fluid-level control laws that achieve the first three objectives for arbitrary networks and delays, but were forced to constrain the resource allocation policy. In this paper we extend the theory to include dynamics at TCP sources, preserving the earlier features at fast time-scales, but permitting sources to match their steady-state preferences, provided a bound on round-trip-times is known. We develop two packet-level implementations of this protocol, using (i) ECN marking, and (ii) queueing delay, as means of communicating the congestion measure from links to sources. We discuss parameter choices and demonstrate using ns-2 simulations the stability of the protocol and its equilibrium features in terms of utilization, queueing and fairness. We also demonstrate the scalability of these features to increases in capacity, delay, and load, in comparison with other deployed and proposed protocols.",
"title": ""
},
{
"docid": "138cd401515c3367428f88d4ef5d5cc7",
"text": "BACKGROUND\nThe present study was designed to implement an interprofessional simulation-based education program for nursing students and evaluate the influence of this program on nursing students' attitudes toward interprofessional education and knowledge about operating room nursing.\n\n\nMETHODS\nNursing students were randomly assigned to either the interprofessional simulation-based education or traditional course group. A before-and-after study of nursing students' attitudes toward the program was conducted using the Readiness for Interprofessional Learning Scale. Responses to an open-ended question were categorized using thematic content analysis. Nursing students' knowledge about operating room nursing was measured.\n\n\nRESULTS\nNursing students from the interprofessional simulation-based education group showed statistically different responses to four of the nineteen questions in the Readiness for Interprofessional Learning Scale, reflecting a more positive attitude toward interprofessional learning. This was also supported by thematic content analysis of the open-ended responses. Furthermore, nursing students in the simulation-based education group had a significant improvement in knowledge about operating room nursing.\n\n\nCONCLUSIONS\nThe integrated course with interprofessional education and simulation provided a positive impact on undergraduate nursing students' perceptions toward interprofessional learning and knowledge about operating room nursing. Our study demonstrated that this course may be a valuable elective option for undergraduate nursing students in operating room nursing education.",
"title": ""
},
{
"docid": "bec4932c66f8a8a87c1967ca42ad4315",
"text": "Nowadays, the number of layers and of neurons in each layer of a deep network are typically set manually. While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, thus making them impractical for constrained platforms. These networks, however, are known to have many redundant parameters, and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80% while retaining or even improving the network accuracy.",
"title": ""
},
{
"docid": "d14aa8618cab54d61750bb9ca0fc3d12",
"text": "Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques including methods for advanced image based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.",
"title": ""
},
{
"docid": "dba13216a4ab1bdf413ce23f2919ba0a",
"text": "Total scalp avulsion is an intractable problem in clinical practice. It often occurs in female adults, but rarely in children. In this article, we described the replantation of 2 scalp segments in a 4-year-old girl. The right segment involved the hairy scalp merely, whereas the left one involved the hairy scalp, forehead, eyelid, ear, and part of the face. Warm ischemia time was about 28 hours, and operation time was 13 hours. Although an explorative operation was performed, the left segment survived partly. A relatively satisfactory aesthetic result was obtained by transplanting split skin from the abdomen and part of the survived right avulsed scalp. According to our experience, every effort should be undertaken to save the avulsed scalp, even in severely damaged and juvenile cases.",
"title": ""
},
{
"docid": "3e749b561a67f2cc608f40b15c71098d",
"text": "As it emerged from philosophical analyses and cognitive research, most concepts exhibit typicality effects, and resist to the efforts of defining them in terms of necessary and sufficient conditions. This holds also in the case of many medical concepts. This is a problem for the design of computer science ontologies, since knowledge representation formalisms commonly adopted in this field (such as, in the first place, the Web Ontology Language OWL) do not allow for the representation of concepts in terms of typical traits. The need of representing concepts in terms of typical traits concerns almost every domain of real world knowledge, including medical domains. In particular, in this article we take into account the domain of mental disorders, starting from the DSM-5 descriptions of some specific disorders. We favour a hybrid approach to concept representation, in which ontology oriented formalisms are combined to a geometric representation of knowledge based on conceptual space. As a preliminary step to apply our proposal to mental disorder concepts, we started to develop an OWL ontology of the schizophrenia spectrum, which is as close as possible to the DSM-5 descriptions.",
"title": ""
},
{
"docid": "1dc41e5c43fc048bc1f1451eaa1ff764",
"text": "249 words) + Body (6178 words) + 4 Figures = 7,427 Total Words Luis Fernando Molina [email protected] (217) 244-6063 Esther Resendiz [email protected] (217) 244-4174 J. Riley Edwards [email protected] (217) 244-7417 John M. Hart [email protected] (217) 244-4174 Christopher P. L. Barkan [email protected] (217) 244-6338 Narendra Ahuja [email protected] (217) 333-1837 3 Corresponding author Molina et al. 11-1442 2 ABSTRACT Individual railroad track maintenance standards and the Federal Railroad Administration (FRA)Individual railroad track maintenance standards and the Federal Railroad Administration (FRA) Track Safety Standards require periodic inspection of railway infrastructure to ensure safe and efficient operation. This inspection is a critical, but labor-intensive task that results in large annual operating expenditures and has limitations in speed, quality, objectivity, and scope. To improve the cost-effectiveness of the current inspection process, machine vision technology can be developed and used as a robust supplement to manual inspections. This paper focuses on the development and performance of machine vision algorithms designed to recognize turnout components, as well as the performance of algorithms designed to recognize and detect defects in other track components. In order to prioritize which components are the most critical for the safe operation of trains, a risk-based analysis of the FRA Accident Database was performed. Additionally, an overview of current technologies for track and turnout component condition assessment is presented. The machine vision system consists of a video acquisition system for recording digital images of track and customized algorithms to identify defects and symptomatic conditions within the images. A prototype machine vision system has been developed for automated inspection of rail anchors and cut spikes, as well as tie recognition. Experimental test results from the system have shown good reliability for recognizing ties, anchors, and cut spikes. This machine vision system, in conjunction with defect analysis and trending of historical data, will enhance the ability for longer-term predictive assessment of the health of the track system and its components. Molina et al. 11-1442 3 INTRODUCTION Railroads conduct regular inspections of their track in order to maintain safe and efficient operation. In addition to internal railroad inspection procedures, periodic track inspections are required under the Federal Railroad Administration (FRA) Track Safety Standards. The objective of this research is to investigate the feasibility of developing a machine vision system to make track inspection more efficient, effective, and objective. In addition, interim approaches to automated track inspection are possible, which will potentially lead to greater inspection effectiveness and efficiency prior to full machine vision system development and implementation. Interim solutions include video capture using vehicle-mounted cameras, image enhancement using image-processing software, and assisted automation using machine vision algorithms (1). The primary focus of this research is inspection of North American Class I railroad mainline and siding tracks, as these generally experience the highest traffic densities. High traffic densities necessitate frequent inspection and more stringent maintenance requirements, and leave railroads less time to accomplish it. 
This makes them the most likely locations for cost-effective investment in new, more efficient, but potentially more capital-intensive inspection technology. The algorithms currently under development will also be adaptable to many types of infrastructure and usage, including transit and some components of high-speed rail (HSR) infrastructure. The machine vision system described in this paper was developed through an interdisciplinary research collaboration at the University of Illinois at Urbana-Champaign (UIUC) between the Computer Vision and Robotics Laboratory (CVRL) at the Beckman Institute for Advanced Science and Technology and the Railroad Engineering Program in the Department of Civil and Environmental Engineering. CURRENT TRACK INSPECTION TECHNOLOGIES USING MACHINE VISION The international railroad community has undertaken significant research to develop innovative applications for advanced technologies with the objective of improving the process of visual track inspection. The development of machine vision, one such inspection technology which uses video cameras, optical sensors, and custom designed algorithms, began in the early 1990’s with work analyzing rail surface defects (2). Machine vision systems are currently in use or under development for a variety of railroad inspection tasks, both wayside and mobile, including inspection of joint bars, surface defects in the rail, rail profile, ballast profile, track gauge, intermodal loading efficiency, railcar structural components, and railcar safety appliances (1, 3-21, 23). The University of Illinois at Urbana-Champaign (UIUC) has been involved in multiple railroad machine-vision research projects sponsored by the Association of American Railroads (AAR), BNSF Railway, NEXTRANS Region V Transportation Center, and the Transportation Research Board (TRB) High-Speed Rail IDEA Program (6-11). In this section, we provide a brief overview of machine vision condition monitoring applications currently in use or under development for inspection of railway infrastructure. Railway applications of machine vision technology have three main elements: the image acquisition system, the image analysis system, and the data analysis system (1). The attributes and performance of each of these individual components determines the overall performance of a machine vision system. Therefore, the following review includes a discussion of the overall Molina et al. 11-1442 4 machine vision system, as well as approaches to image acquisition, algorithm development techniques, lighting methodologies, and experimental results. Rail Surface Defects The Institute of Digital Image Processing (IDIP) in Austria has developed a machine vision system for rail surface inspection during the rail manufacturing process (12). Currently, rail inspection is carried out by humans and complemented with eddy current systems. The objective of this machine vision system is to replace visual inspections on rail production lines. The machine vision system uses spectral image differencing procedure (SIDP) to generate threedimensional (3D) images and detect surface defects in the rails. Additionally, the cameras can capture images at speeds up to 37 miles per hour (mph) (60 kilometers per hour (kph)). Although the system is currently being used only in rail production lines, it can also be attached to an inspection vehicle for field inspection of rail. 
Additionally, the Institute of Intelligent Systems for Automation (ISSIA) in Italy has been researching and developing a system for detecting rail corrugation (13). The system uses images of 512x2048 pixels in resolution, artificial light, and classification of texture to identify surface defects. The system is capable of acquiring images at speeds of up to 125 mph (200 kph). Three image-processing methods have been proposed and evaluated by IISA: Gabor, wavelet, and Gabor wavelet. Gabor was selected as the preferred processing technique. Currently, the technology has been implemented through the patented system known as Visual Inspection System for Railways (VISyR). Rail Wear The Moscow Metro and the State of Common Means of Moscow developed photonic system to measure railhead wear (14). The system consists of 4 CCD cameras and 4 laser lights mounted on an inspection vehicle. The cameras are connected to a central computer that receives images every 20 nanoseconds (ns). The system extracts the profile of the rail using two methods (cut-off and tangent) and the results are ultimately compared with pre-established rail wear templates. Tie Condition The Georgetown Rail Equipment Company (GREX) has developed and commercialized a crosstie inspection system called AURORA (15). The objective of the system is to inspect and classify the condition of timber and concrete crossties. Additionally, the system can be adapted to measure rail seat abrasion (RSA) and detect defects in fastening systems. AURORA uses high-definition cameras and high-voltage lasers as part of the lighting arrangement and is capable of inspecting 70,000 ties per hour at a speed of 30-45 mph (48-72 kph). The system has been shown to replicate results obtained by track inspectors with an accuracy of 88%. Since 2008, Napier University in Sweden has been researching the use of machine vision technology for inspection of timber crossties (16). Their system evaluates the condition of the ends of the ties and classifies them into one of two categories: good or bad. This classification is performed by evaluating quantitative parameters such as the number, length, and depth of cracks, as well as the condition of the tie plate. Experimental results showed that the system has an accuracy of 90% with respect to the correct classification of ties. Future research work includes evaluation of the center portion of the ties and integration with other non-destructive testing (NDT) applications. Molina et al. 11-1442 5 In 2003, the University of Zaragoza in Spain began research on the development of machine vision techniques to inspect concrete crossties using a stereo-metric system to measure different surface shapes (17). The system is used to estimate the deviation from the required dimensional tolerances of the concrete ties in production lines. Two CCD cameras with a resolution of 768x512 pixels are used for image capture and lasers are used for artificial lighting. The system has been shown to produce reliable results, but quantifiable results were not found in the available literature. Ballast The ISS",
"title": ""
},
{
"docid": "8d49e37ab80dae285dbf694ba1849f68",
"text": "In this paper we present a reference architecture for ETL stages of EDM and LA that works with different data formats and different extraction sites, ensuring privacy and making easier for new participants to enter into the process without demanding more computing power. Considering scenarios with a multitude of virtual environments hosting educational activities, accessible through a common infrastructure, we devised a reference model where data generated from interaction between users and among users and the environment itself, are selected, organized and stored in local “baskets”. Local baskets are then collected and grouped in a global basket. Organization resources like item modeling are used in both levels of basket construction. Using this reference upon a client-server architectural style, a reference architecture was developed and has been used to carry out a project for an official foundation linked to Brazilian Ministry of Education, involving educational data mining and sharing of 100+ higher education institutions and their respective virtual environments. In this architecture, a client-collector inside each virtual environment collects information from database and event logs. This information along with definitions obtained from item models are used to build local baskets. A synchronization protocol keeps all item models synced with client-collectors and server-collectors generating global baskets. This approach has shown improvements on ETL like: parallel processing of items, economy on storage space and bandwidth, privacy assurance, better tenacity, and good scalability.",
"title": ""
},
{
"docid": "3682143e9cfe7dd139138b3b533c8c25",
"text": "In brushless excitation systems, the rotating diodes can experience open- or short-circuits. For a three-phase synchronous generator under no-load, we present theoretical development of effects of diode failures on machine output voltage. Thereby, we expect the spectral response faced with each fault condition, and we propose an original algorithm for state monitoring of rotating diodes. Moreover, given experimental observations of the spectral behavior of stray flux, we propose an alternative technique. Laboratory tests have proven the effectiveness of the proposed methods for detection of fault diodes, even when the generator has been fully loaded. However, their ability to distinguish between cases of diodes interrupted and short-circuited, has been limited to the no-load condition, and certain loads of specific natures.",
"title": ""
},
{
"docid": "bffbc725b52468b41c53b156f6eadedb",
"text": "This paper presents the design and experimental evaluation of an underwater robot that is propelled by a pair of lateral undulatory fins, inspired by the locomotion of rays and cuttlefish. Each fin mechanism is comprised of three individually actuated fin rays, which are interconnected by an elastic membrane. An on-board microcontroller generates the rays’ motion pattern that result in the fins’ undulations, through which propulsion is generated. The prototype, which is fully untethered and energetically autonomous, also integrates an Inertial Measurement Unit for navigation purposes, a wireless communication module, and a video camera for recording underwater footage. Due to its small size and low manufacturing cost, the developed prototype can also serve as an educational platform for underwater robotics.",
"title": ""
}
] |
scidocsrr
|
90dfdf7c668300b32a3c8bb588465c75
|
Third-Generation Pleated Pneumatic Artificial Muscles for Robotic Applications: Development and Comparison with McKibben Muscle
|
[
{
"docid": "b2199b7be543f0f287e0cbdb7a477843",
"text": "We developed a pneumatically powered orthosis for the human ankle joint. The orthosis consisted of a carbon fiber shell, hinge joint, and two artificial pneumatic muscles. One artificial pneumatic muscle provided plantar flexion torque and the second one provided dorsiflexion torque. Computer software adjusted air pressure in each artificial muscle independently so that artificial muscle force was proportional to rectified low-pass-filtered electromyography (EMG) amplitude (i.e., proportional myoelectric control). Tibialis anterior EMG activated the artificial dorsiflexor and soleus EMG activated the artificial plantar flexor. We collected joint kinematic and artificial muscle force data as one healthy participant walked on a treadmill with the orthosis. Peak plantar flexor torque provided by the orthosis was 70 Nm, and peak dorsiflexor torque provided by the orthosis was 38 Nm. The orthosis could be useful for basic science studies on human locomotion or possibly for gait rehabilitation after neurological injury.",
"title": ""
},
{
"docid": "bb8115f8c172e22bd0ff70bd079dfa98",
"text": "This paper reports on the second generation of the Pleated Pneumatic Artificial Muscle (PPAM) which has been developed to extend the life span of its first prototype. This type of artificial was developed to overcome dry friction and material deformation which is present in the widely used McKibben type of artificial muscle. The essence of the PPAM is its pleated membrane structure which enables the muscle to work at low pressures and at large contractions. There is a growing interest in this kind of actuation for robotics applications due to its high power to weight ratio and the adaptable compliance, especially for legged locomotion and robot applications in direct contact with a human. This paper describes the design of the second generation PPAM, for which specifically the membrane layout has been changed. In function of this new layout the mathematical model, developed for the first prototype, has been reformulated. This paper gives an elaborate discussion on this mathematical model which represents the force generation and enclosed muscle volume. Static load tests on some real muscles, which have been carried out in order to validate the mathematical model, are then discussed. Furthermore are given two robotic applications which currently use these pneumatic artificial muscles. One is the biped Lucy and the another one is a manipulator application which works in direct contact with an operator.",
"title": ""
}
] |
[
{
"docid": "85da43096d4ef2dcb3f8f9ae9ea2db35",
"text": "We present an approach that combines automatic features learned by convolutional neural networks (CNN) and handcrafted features computed by the bag-of-visual-words (BOVW) model in order to achieve state-of-the-art results in facial expression recognition. To obtain automatic features, we experiment with multiple CNN architectures, pretrained models and training procedures, e.g. Dense-SparseDense. After fusing the two types of features, we employ a local learning framework to predict the class label for each test image. The local learning framework is based on three steps. First, a k-nearest neighbors model is applied for selecting the nearest training samples for an input test image. Second, a one-versus-all Support Vector Machines (SVM) classifier is trained on the selected training samples. Finally, the SVM classifier is used for predicting the class label only for the test image it was trained for. Although we used local learning in combination with handcrafted features in our previous work, to the best of our knowledge, local learning has never been employed in combination with deep features. The experiments on the 2013 Facial Expression Recognition (FER) Challenge data set and the FER+ data set demonstrate that our approach achieves state-ofthe-art results. With a top accuracy of 75.42% on the FER 2013 data set and 87.76% on the FER+ data set, we surpass all competition by more than 2% on both data sets.",
"title": ""
},
{
"docid": "211484ec722f4df6220a86580d7ecba8",
"text": "The widespread use of vision-based surveillance systems has inspired many research efforts on people localization. In this paper, a series of novel image transforms based on the vanishing point of vertical lines is proposed for enhancement of the probabilistic occupancy map (POM)-based people localization scheme. Utilizing the characteristic that the extensions of vertical lines intersect at a vanishing point, the proposed transforms, based on image or ground plane coordinate system, aims at producing transformed images wherein each standing/walking person will have an upright appearance. Thus, the degradation in localization accuracy due to the deviation of camera configuration constraint specified can be alleviated, while the computation efficiency resulted from the applicability of integral image can be retained. Experimental results show that significant improvement in POM-based people localization for more general camera configurations can indeed be achieved with the proposed image transforms.",
"title": ""
},
{
"docid": "4e071e10b9263d98061b87a7c7ceee02",
"text": "Seeking more common ground between data scientists and their critics.",
"title": ""
},
{
"docid": "37e561a8dd29299dee5de2cb7781c5a3",
"text": "The management of knowledge and experience are key means by which systematic software development and process improvement occur. Within the domain of software engineering (SE), quality continues to remain an issue of concern. Although remedies such as fourth generation programming languages, structured techniques and object-oriented technology have been promoted, a \"silver bullet\" has yet to be found. Knowledge management (KM) gives organisations the opportunity to appreciate the challenges and complexities inherent in software development. We report on two case studies that investigate KM in SE at two IT organisations. Structured interviews were conducted, with the assistance of a qualitative questionnaire. The results were used to describe current practices for KM in SE, to investigate the nature of KM activities in these organisations, and to explain the impact of leadership, technology, culture and measurement as enablers of the KM process for SE.",
"title": ""
},
{
"docid": "c446a98b6fd9fca75bb9255c6c3aadc7",
"text": "This paper describes the development of a video/game art project being produced by media artist Bill Viola in collaboration with a team from the USC Game Innovation Lab, which uses a combination of both video and game technologies to explore the universal experience of an individual's journey towards enlightenment. Here, we discuss both the creative and technical approaches to achieving the project's goals of evoking in the player the sense of undertaking a spiritual journey.",
"title": ""
},
{
"docid": "c93c690ecb038a87c351d9674f0a881a",
"text": "Foot-operated computer interfaces have been studied since the inception of human--computer interaction. Thanks to the miniaturisation and decreasing cost of sensing technology, there is an increasing interest exploring this alternative input modality, but no comprehensive overview of its research landscape. In this survey, we review the literature on interfaces operated by the lower limbs. We investigate the characteristics of users and how they affect the design of such interfaces. Next, we describe and analyse foot-based research prototypes and commercial systems in how they capture input and provide feedback. We then analyse the interactions between users and systems from the perspective of the actions performed in these interactions. Finally, we discuss our findings and use them to identify open questions and directions for future research.",
"title": ""
},
{
"docid": "e4d21fb10d9ca88902f5b0fa11dd5cc2",
"text": "We describe an efficient algorithm for releasing a provably private estimate of the degree distribution of a network. The algorithm satisfies a rigorous property of differential privacy, and is also extremely efficient, running on networks of 100 million nodes in a few seconds. Theoretical analysis shows that the error scales linearly with the number of unique degrees, whereas the error of conventional techniques scales linearly with the number of nodes. We complement the theoretical analysis with a thorough empirical analysis on real and synthetic graphs, showing that the algorithm's variance and bias is low, that the error diminishes as the size of the input graph increases, and that common analyses like fitting a power-law can be carried out very accurately.",
"title": ""
},
{
"docid": "61da3c6eaa2e140bcd218e1d81a7c803",
"text": "Sub-Resolution Assist Feature (SRAF) generation is a very important resolution enhancement technique to improve yield in modern semiconductor manufacturing process. Model- based SRAF generation has been widely used to achieve high accuracy but it is known to be time consuming and it is hard to obtain consistent SRAFs on the same layout pattern configurations. This paper proposes the first ma- chine learning based framework for fast yet consistent SRAF generation with high quality of results. Our technical con- tributions include robust feature extraction, novel feature compaction, model training for SRAF classification and pre- diction, and the final SRAF generation with consideration of practical mask manufacturing constraints. Experimental re- sults demonstrate that, compared with commercial Calibre tool, our machine learning based SRAF generation obtains 10X speed up and comparable performance in terms of edge placement error (EPE) and process variation (PV) band.",
"title": ""
},
{
"docid": "a98631b46893645a94a83995836dc71d",
"text": "This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.",
"title": ""
},
{
"docid": "470de415df4d9e4ff491381cb1991007",
"text": "Computationally predicting drug-target interactions is useful to select possible drug (or target) candidates for further biochemical verification. We focus on machine learning-based approaches, particularly similarity-based methods that use drug and target similarities, which show relationships among drugs and those among targets, respectively. These two similarities represent two emerging concepts, the chemical space and the genomic space. Typically, the methods combine these two types of similarities to generate models for predicting new drug-target interactions. This process is also closely related to a lot of work in pharmacogenomics or chemical biology that attempt to understand the relationships between the chemical and genomic spaces. This background makes the similarity-based approaches attractive and promising. This article reviews the similarity-based machine learning methods for predicting drug-target interactions, which are state-of-the-art and have aroused great interest in bioinformatics. We describe each of these methods briefly, and empirically compare these methods under a uniform experimental setting to explore their advantages and limitations.",
"title": ""
},
{
"docid": "a5ee673c895bac1a616bb51439461f5f",
"text": "OBJECTIVES\nTo summarise logistical aspects of recently completed systematic reviews that were registered in the International Prospective Register of Systematic Reviews (PROSPERO) registry to quantify the time and resources required to complete such projects.\n\n\nDESIGN\nMeta-analysis.\n\n\nDATA SOURCES AND STUDY SELECTION\nAll of the 195 registered and completed reviews (status from the PROSPERO registry) with associated publications at the time of our search (1 July 2014).\n\n\nDATA EXTRACTION\nAll authors extracted data using registry entries and publication information related to the data sources used, the number of initially retrieved citations, the final number of included studies, the time between registration date to publication date and number of authors involved for completion of each publication. Information related to funding and geographical location was also recorded when reported.\n\n\nRESULTS\nThe mean estimated time to complete the project and publish the review was 67.3 weeks (IQR=42). The number of studies found in the literature searches ranged from 27 to 92 020; the mean yield rate of included studies was 2.94% (IQR=2.5); and the mean number of authors per review was 5, SD=3. Funded reviews took significantly longer to complete and publish (mean=42 vs 26 weeks) and involved more authors and team members (mean=6.8 vs 4.8 people) than those that did not report funding (both p<0.001).\n\n\nCONCLUSIONS\nSystematic reviews presently take much time and require large amounts of human resources. In the light of the ever-increasing volume of published studies, application of existing computing and informatics technology should be applied to decrease this time and resource burden. We discuss recently published guidelines that provide a framework to make finding and accessing relevant literature less burdensome.",
"title": ""
},
{
"docid": "8b3962dc5895a46c913816f208aa8e60",
"text": "Glaucoma is the second leading cause of blindness worldwide. It is a disease in which fluid pressure in the eye increases continuously, damaging the optic nerve and causing vision loss. Computational decision support systems for the early detection of glaucoma can help prevent this complication. The retinal optic nerve fiber layer can be assessed using optical coherence tomography, scanning laser polarimetry, and Heidelberg retina tomography scanning methods. In this paper, we present a novel method for glaucoma detection using a combination of texture and higher order spectra (HOS) features from digital fundus images. Support vector machine, sequential minimal optimization, naive Bayesian, and random-forest classifiers are used to perform supervised classification. Our results demonstrate that the texture and HOS features after z-score normalization and feature selection, and when combined with a random-forest classifier, performs better than the other classifiers and correctly identifies the glaucoma images with an accuracy of more than 91%. The impact of feature ranking and normalization is also studied to improve results. Our proposed novel features are clinically significant and can be used to detect glaucoma accurately.",
"title": ""
},
{
"docid": "91d0f12e9303b93521146d4d650a63df",
"text": "We utilize the state-of-the-art in deep learning to show that we can learn by example what constitutes humor in the context of a Yelp review. To the best of the authors knowledge, no systematic study of deep learning for humor exists – thus, we construct a scaffolded study. First, we use “shallow” methods such as Random Forests and Linear Discriminants built on top of bag-of-words and word vector features. Then, we build deep feedforward networks on top of these features – in some sense, measuring how much of an effect basic feedforward nets help. Then, we use recurrent neural networks and convolutional neural networks to more accurately model the sequential nature of a review.",
"title": ""
},
{
"docid": "22beed9d31913f09e81063dbcb751c42",
"text": "In this paper an approach for 360 degree multi sensor fusion for static and dynamic obstacles is presented. The perception of static and dynamic obstacles is achieved by combining the advantages of model based object tracking and an occupancy map. For the model based object tracking a novel multi reference point tracking system, called best knowledge model, is introduced. The best knowledge model allows to track and describe objects with respect to a best suitable reference point. It is explained how the object tracking and the occupancy map closely interact and benefit from each other. Experimental results of the 360 degree multi sensor fusion system from an automotive test vehicle are shown.",
"title": ""
},
{
"docid": "dcdb6242febbef358efe5a1461957291",
"text": "Neuromorphic Engineering has emerged as an exciting research area, primarily owing to the paradigm shift from conventional computing architectures to data-driven, cognitive computing. There is a diversity of work in the literature pertaining to neuromorphic systems, devices and circuits. This review looks at recent trends in neuromorphic engineering and its sub-domains, with an attempt to identify key research directions that would assume significance in the future. We hope that this review would serve as a handy reference to both beginners and experts, and provide a glimpse into the broad spectrum of applications of neuromorphic hardware and algorithms. Our survey indicates that neuromorphic engineering holds a promising future, particularly with growing data volumes, and the imminent need for intelligent, versatile computing.",
"title": ""
},
{
"docid": "d4e652097c6e3b7c265adf4848471d19",
"text": "The usage of Unmanned Aerial Vehicles (UAVs) is increasing day by day. In recent years, UAVs are being used in increasing number of civil applications, such as policing, fire-fighting, etc in addition to military applications. Instead of using one large UAV, multiple UAVs are nowadays used for higher coverage area and accuracy. Therefore, networking models are required to allow two or more UAV nodes to communicate directly or via relay node(s). Flying Ad-Hoc Networks (FANETs) are formed which is basically an ad hoc network for UAVs. This is relatively a new technology in network family where requirements vary largely from traditional networking model, such as Mobile Ad-hoc Networks and Vehicular Ad-hoc Networks. In this paper, Flying Ad-Hoc Networks are surveyed along with its challenges compared to traditional ad hoc networks. The existing routing protocols for FANETs are then classified into six major categories which are critically analyzed and compared based on various performance criteria. Our comparative analysis will help network engineers in choosing appropriate routing protocols based on the specific scenario where the FANET will be deployed.",
"title": ""
},
{
"docid": "e9f6216d30871debcccc04cc12e53fda",
"text": "We propose a local coherence model that captures the flow of what semantically connects adjacent sentences in a text. We represent the semantics of a sentence by a vector and capture its state at each word of the sentence. We model what relates two adjacent sentences based on the two most similar semantic states, each of which is in one of the sentences. We encode the perceived coherence of a text by a vector, which represents patterns of changes in salient information that relates adjacent sentences. Our experiments demonstrate that our approach is beneficial for two downstream tasks: Readability assessment, in which our model achieves new state-of-the-art results; and essay scoring, in which the combination of our coherence vectors and other taskdependent features significantly improves the performance of a strong essay scorer.",
"title": ""
},
{
"docid": "22bed4d5c38a096ae24a76dce7fc5136",
"text": "BACKGROUND\nMedical Image segmentation is an important image processing step. Comparing images to evaluate the quality of segmentation is an essential part of measuring progress in this research area. Some of the challenges in evaluating medical segmentation are: metric selection, the use in the literature of multiple definitions for certain metrics, inefficiency of the metric calculation implementations leading to difficulties with large volumes, and lack of support for fuzzy segmentation by existing metrics.\n\n\nRESULT\nFirst we present an overview of 20 evaluation metrics selected based on a comprehensive literature review. For fuzzy segmentation, which shows the level of membership of each voxel to multiple classes, fuzzy definitions of all metrics are provided. We present a discussion about metric properties to provide a guide for selecting evaluation metrics. Finally, we propose an efficient evaluation tool implementing the 20 selected metrics. The tool is optimized to perform efficiently in terms of speed and required memory, also if the image size is extremely large as in the case of whole body MRI or CT volume segmentation. An implementation of this tool is available as an open source project.\n\n\nCONCLUSION\nWe propose an efficient evaluation tool for 3D medical image segmentation using 20 evaluation metrics and provide guidelines for selecting a subset of these metrics that is suitable for the data and the segmentation task.",
"title": ""
},
{
"docid": "83c407843732c4d237ff6e07da40297f",
"text": "Although deep reinforcement learning has achieved great success recently, there are still challenges in Real Time Strategy (RTS) games. Due to its large state and action space, as well as hidden information, RTS games require macro strategies as well as micro level manipulation to obtain satisfactory performance. In this paper, we present a novel hierarchical reinforcement learning model for mastering Multiplayer Online Battle Arena (MOBA) games, a sub-genre of RTS games. In this hierarchical framework, agents make macro strategies by imitation learning and do micromanipulations through reinforcement learning. Moreover, we propose a simple self-learning method to get better sample efficiency for reinforcement part and extract some global features by multi-target detection method in the absence of game engine or API. In 1v1 mode, our agent successfully learns to combat and defeat built-in AI with 100% win rate, and experiments show that our method can create a competitive multi-agent for a kind of mobile MOBA game King of Glory (KOG) in 5v5 mode.",
"title": ""
},
{
"docid": "91f390e8ea6c931dff1e1d171cede590",
"text": "Deep neural networks are state of the art methods for many learning tasks due to their ability to extract increasingly better features at each network layer. However, the improved performance of additional layers in a deep network comes at the cost of added latency and energy usage in feedforward inference. As networks continue to get deeper and larger, these costs become more prohibitive for real-time and energy-sensitive applications. To address this issue, we present BranchyNet, a novel deep network architecture that is augmented with additional side branch classifiers. The architecture allows prediction results for a large portion of test samples to exit the network early via these branches when samples can already be inferred with high confidence. BranchyNet exploits the observation that features learned at an early layer of a network may often be sufficient for the classification of many data points. For more difficult samples, which are expected less frequently, BranchyNet will use further or all network layers to provide the best likelihood of correct prediction. We study the BranchyNet architecture using several well-known networks (LeNet, AlexNet, ResNet) and datasets (MNIST, CIFAR10) and show that it can both improve accuracy and significantly reduce the inference time of the network.",
"title": ""
}
] |
scidocsrr
|
76519c229292246b6531b08ec5e803b6
|
Detection and localization of persons behind obstacles using M-sequence through-the-wall radar
|
[
{
"docid": "9ffaf53e8745d1f7f5b7ff58c77602c6",
"text": "Background subtraction is a widely used approach for detecting moving objects from static cameras. Many different methods have been proposed over the recent years and both the novice and the expert can be confused about their benefits and limitations. In order to overcome this problem, this paper provides a review of the main methods and an original categorisation based on speed, memory requirements and accuracy. Such a review can effectively guide the designer to select the most suitable method for a given application in a principled way. Methods reviewed include parametric and non-parametric background density estimates and spatial correlation approaches.",
"title": ""
}
] |
[
{
"docid": "039044aaa25f047e28daba08237c0de5",
"text": "BI technologies are essential to running today's businesses and this technology is going through sea changes.",
"title": ""
},
{
"docid": "09b86e959a0b3fa28f9d3462828bbc31",
"text": "Industry 4.0 has become more popular due to recent developments in cyber-physical systems, big data, cloud computing, and industrial wireless networks. Intelligent manufacturing has produced a revolutionary change, and evolving applications, such as product lifecycle management, are becoming a reality. In this paper, we propose and implement a manufacturing big data solution for active preventive maintenance in manufacturing environments. First, we provide the system architecture that is used for active preventive maintenance. Then, we analyze the method used for collection of manufacturing big data according to the data characteristics. Subsequently, we perform data processing in the cloud, including the cloud layer architecture, the real-time active maintenance mechanism, and the offline prediction and analysis method. Finally, we analyze a prototype platform and implement experiments to compare the traditionally used method with the proposed active preventive maintenance method. The manufacturing big data method used for active preventive maintenance has the potential to accelerate implementation of Industry 4.0.",
"title": ""
},
{
"docid": "9ec033063981d42a6d901de00179b433",
"text": "Face landmarking, defined as the detection and localization of certain characteristic points on the face, is an important intermediary step for many subsequent face processing operations that range from biometric recognition to the understanding of mental states. Despite its conceptual simplicity, this computer vision problem has proven extremely challenging due to inherent face variability as well as the multitude of confounding factors such as pose, expression, illumination and occlusions. The purpose of this survey is to give an overview of landmarking algorithms and their progress over the last decade, categorize them and show comparative performance statistics of the state of the art. We discuss the main trends and indicate current shortcomings with the expectation that this survey will provide further impetus for the much needed high-performance, real-life face landmarking operating at video rates.",
"title": ""
},
{
"docid": "1ae3eb81ae75f6abfad4963ee0056be5",
"text": "Due to the shared responsibility model of clouds, tenants have to manage the security of their workloads and data. Developing security solutions using VMs or containers creates further problems as these resources also need to be secured. In this paper, we advocate for taking a serverless approach by proposing six serverless design patterns to build security services in the cloud. For each design pattern, we describe the key advantages and present applications and services utilizing the pattern. Using the proposed patterns as building blocks, we introduce a threat-intelligence platform that collects logs from various sources, alerts malicious activities, and takes actions against such behaviors. We also discuss the limitations of serverless design and how future implementations can overcome those limitations.",
"title": ""
},
{
"docid": "8c0e5e48c8827a943f4586b8e75f4f9d",
"text": "Predicting the results of football matches poses an interesting challenge due to the fact that the sport is so popular and widespread. However, predicting the outcomes is also a difficult problem because of the number of factors which must be taken into account that cannot be quantitatively valued or modeled. As part of this work, a software solution has been developed in order to try and solve this problem. During the development of the system, a number of tests have been carried out in order to determine the optimal combination of features and classifiers. The results of the presented system show a satisfactory capability of prediction which is superior to the one of the reference method (most likely a priori outcome).",
"title": ""
},
{
"docid": "81f9e48116b089f3aea6ed1887f5c438",
"text": "On the international level ISO (International Organization for Standardization), CEN (European Committee for Standardization) and ASHRAE (American Society of Heating, Refrigerating and Air Conditioning Engineers) are writing standards related to the indoor environment. This presentation will focus on the development of standards for the indoor thermal environment and indoor air quality. In the future, recommendations for acceptable indoor environments will be specified as classes. This allows for national differences in the requirements and also for designing buildings for different quality levels. This will require a better dialogue between the client (builder, owner) and the designer. It is also being discussed how people can adapt to accept higher indoor temperatures during summer in naturally ventilated (free running) buildings. Several of these standards have been developed mainly by experts from Europe, North America and Japan, thus guaranteeing a worldwide basis. Are there, however, special considerations related to other parts of the world (lifestyle, outdoor climate, and economy), which are not dealt with in these standards and which will require revision? Critical issues such as adaptation, effect of increased air velocity, humidity, type of indoor pollutant sources etc. are still being discussed, but in general these standards can be used worldwide. It is nevertheless important to take into account people’s clothing related to regional traditions and season.",
"title": ""
},
{
"docid": "6838cf1310f0321cd524bb1120f35057",
"text": "One of the most compelling visions of future robots is that of the robot butler. An entity dedicated to fulfilling your every need. This obviously has its benefits, but there could be a flipside to this vision. To fulfill the needs of its users, it must first be aware of them, and so it could potentially amass a huge amount of personal data regarding its user, data which may or may not be safe from accidental or intentional disclosure to a third party. How may prospective owners of a personal robot feel about the data that might be collected about them? In order to investigate this issue experimentally, we conducted an exploratory study where 12 participants were exposed to an HRI scenario in which disclosure of personal information became an issue. Despite the small sample size interesting results emerged from this study, indicating how future owners of personal robots feel regarding what the robot will know about them, and what safeguards they believe should be in place to protect owners from unwanted disclosure of private information.",
"title": ""
},
{
"docid": "78a29e0e00aa65517a70fc17293e84c4",
"text": "The model parameters of convolutional neural networks (CNNs) are determined by backpropagation (BP). In this work, we propose an interpretable feedforward (FF) design without any BP as a reference. The FF design adopts a data-centric approach. It derives network parameters of the current layer based on data statistics from the output of the previous layer in a one-pass manner. To construct convolutional layers, we develop a new signal transform, called the Saab (Subspace approximation with adjusted bias) transform. It is a variant of the principal component analysis (PCA) with an added bias vector to annihilate activation’s nonlinearity. Multiple Saab transforms in cascade yield multiple convolutional layers. As to fully-connected (FC) layers, we construct them using a cascade of multi-stage linear least squared regressors (LSRs). The classification and robustness (against adversarial attacks) performances of BPand FF-designed CNNs applied to the MNIST and the CIFAR-10 datasets are compared. Finally, we comment on the relationship between BP and FF designs.",
"title": ""
},
{
"docid": "6347907900a9f2f6e2d7679705c03e0c",
"text": "AIM\nCerebro-spinal fluid (CSF) leakage caused by defects on the dura mater after trauma or some neurosurgical interventions is an important issue. In this study, we investigated the effects of local and systemic use of phenytoin sodium on dural healing.\n\n\nMATERIAL AND METHODS\nThirty-six male Wistar rats were divided into control, local phenytoin and systemic phenytoin groups with 12 rats in each. For each group, a dura defect was created at thoracic segment. Subjects were sacrificed at following 1st and 6th weeks and damaged segments were isolated. The results were compared histopathologically by Hematoxylin-Eosin and Masson-Trichrome staining. Criteria for the rate of collagen, neovascularization, and granulation formation were assessed semi quantitatively according to the histological assessment scale modified by Ozisik et al.\n\n\nRESULTS\nBetter healing was achieved in the systemic and local phenytoin groups than in the control group. The level of healing was significantly higher in the systemic group in both early and late periods than in other groups (p < 0.01). The level of healing in the late-local group was also statistically significantly higher than that in the control group.\n\n\nCONCLUSION\nWe observed that both systemic and local uses of phenytoin sodium (especially systemic) have positive effects on dura healing.",
"title": ""
},
{
"docid": "6cfc078d0b908cb020417d4503e5bade",
"text": "How does an entrepreneur’s social network impact crowdfunding? Based on social capital theory, we developed a research model and conducted a comparative study using objective data collected from China and the U.S. We found that an entrepreneur’s social network ties, obligations to fund other entrepreneurs, and the shared meaning of the crowdfunding project between the entrepreneur and the sponsors had significant effects on crowdfunding performance in both China and the U.S. The predictive power of the three dimensions of social capital was stronger in China than it was in the U.S. Obligation also had a greater impact in China. 2014 Elsevier B.V. All rights reserved. § This study is supported by the Natural Science Foundation of China (71302186), the Chinese Ministry of Education Humanities and Social Sciences Young Scholar Fund (12YJCZH306), the China National Social Sciences Fund (11AZD077), and the Fundamental Research Funds for the Central Universities (JBK120505). * Corresponding author. Tel.: +1 218 726 7334. E-mail addresses: [email protected] (H. Zheng), [email protected] (D. Li), [email protected] (J. Wu), [email protected] (Y. Xu).",
"title": ""
},
{
"docid": "3e7d7fade4bd3f2a3684d4348520bdb7",
"text": "Training triplet networks with large-scale data is challenging in face recognition. Due to the number of possible triplets explodes with the number of samples, previous studies adopt the online hard negative mining(OHNM) to handle it. However, as the number of identities becomes extremely large, the training will suffer from bad local minima because effective hard triplets are difficult to be found. To solve the problem, in this paper, we propose training triplet networks with subspace learning, which splits the space of all identities into subspaces consisting of only similar identities. Combined with the batch OHNM, hard triplets can be found much easier. Experiments on the large-scale MS-Celeb-1M challenge with 100 K identities demonstrate that the proposed method can largely improve the performance. In addition, to deal with heavy noise and large-scale retrieval, we also make some efforts on robust noise removing and efficient image retrieval, which are used jointly with the subspace learning to obtain the state-of-the-art performance on the MS-Celeb-1M competition (without external data in Challenge1).",
"title": ""
},
{
"docid": "8a1d0d2767a35235fa5ac70818ec92e7",
"text": "This work demonstrates two 94 GHz SPDT quarter-wave shunt switches using saturated SiGe HBTs. A new mode of operation, called reverse saturation, using the emitter at the RF output node of the switch, is utilized to take advantage of the higher emitter doping and improved isolation from the substrate. The switches were designed in a 180 nm SiGe BiCMOS technology featuring 90 nm SiGe HBTs (selective emitter shrink) with fT/fmax of 250/300+ GHz. The forward-saturated switch achieves an insertion loss and isolation at 94 GHz of 1.8 dB and 19.3 dB, respectively. The reverse-saturated switch achieves a similar isolation, but reduces the insertion loss to 1.4 dB. This result represents a 30% improvement in insertion loss in comparison to the best CMOS SPDT at 94 GHz.",
"title": ""
},
{
"docid": "0cb2c9d4f7c54450bddd84eed70ed403",
"text": "The well-known Mori-Zwanzig theory tells us that model reduction leads to memory effect. For a long time, modeling the memory effect accurately and efficiently has been an important but nearly impossible task in developing a good reduced model. In this work, we explore a natural analogy between recurrent neural networks and the Mori-Zwanzig formalism to establish a systematic approach for developing reduced models with memory. Two training models-a direct training model and a dynamically coupled training model-are proposed and compared. We apply these methods to the Kuramoto-Sivashinsky equation and the Navier-Stokes equation. Numerical experiments show that the proposed method can produce reduced model with good performance on both short-term prediction and long-term statistical properties. In science and engineering, many high-dimensional dynamical systems are too complicated to solve in detail. Nor is it necessary since usually we are only interested in a small subset of the variables representing the gross behavior of the system. Therefore, it is useful to develop reduced models which can approximate the variables of interest without solving the full system. This is the celebrated model reduction problem. Even though model reduction has been widely explored in many fields, to this day there is still a lack of systematic and reliable methodologies for model reduction. One has to rely on uncontrolled approximations in order to move things forward. On the other hand, there is in principle a rather solid starting point, the Mori-Zwanzig (M-Z) theory, for performing model reduction [1], [2]. In M-Z, the effect of unresolved variables on resolved ones is represented as a memory and a noise term, giving rise to the so-called generalized Langevin equation (GLE). Solving the GLE accurately is almost equivalent to solving the full system, because the memory kernel and noise terms contain the full information for the unresolved variables. This means that the M-Z theory does not directly lead to a reduction of complexity or the computational cost. However, it does provide a starting point for making approximations. In this regard, we mention in particular the t-model proposed by Chorin et al [3]. In [4] reduced models of the viscous Burgers equation and 3-dimensional Navier-Stokes equation were developed by analytically approximating the memory kernel in the GLE using the trapezoidal integration scheme. Li and E [5] developed approximate boundary conditions for molecular dynamics using linear approximation of the M-Z formalism. In [6], auxiliary variables are used to deal with the non-Markovian dynamics of the GLE. Despite all of these efforts, it is fair to say that there is still a lack of systematic and reliable procedure for approximating the GLE. In fact, dealing with the memory terms explicitly does not seem to be a promising approach for deriving systematic and reliable approximations to the GLE. ∗The Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544, USA †Department of Mechanics and Aerospace Engineering, Southern University of Science and Technology, Shenzhen 518055, Peoples Republic of China ‡Beijing Institute of Big Data Research, Beijing, 100871, P.R. China 1 ar X iv :1 80 8. 04 25 8v 1 [ cs .L G ] 1 0 A ug 2 01 8 One of the most successful approaches for representing memory effects has been the recurrent neural networks (RNN) in machine learning. Indeed there is a natural analogy between RNN and M-Z. 
The hidden states in RNN can be viewed as a reduced representation of the unresolved variables in M-Z. We can then view RNN as a way of performing dimension reduction in the space of the unresolved variables. In this paper, we explore the possibility of performing model reduction using RNNs. We will limit ourselves to the situation when the original model is in the form of a conservative partial differential equation (PDE), the reduced model is an averaged version of the original PDE. The crux of the matter is then the accurate representation of the unresolved flux term. We propose two kinds of models. In the first kind, the unresolved flux terms in the equation are learned from data. This flux model is then used in the averaged equation to form the reduced model. We call this the direct training model. A second approach, which we call the coupled training model, is to train the neural network together with the averaged equation. From the viewpoint of machine learning, the objective in the direct training model is to fit the unresolved flux. The objective in the coupled training model is to fit the resolved variables (the averaged quantities). For application, we focus on the Kuramoto-Sivashinsky (K-S) equation and the Navier-Stokes (N-S) equation. The K-S equation writes as ∂u/∂t + (1/2) ∂(u²)/∂x + ∂²u/∂x² + ∂⁴u/∂x⁴ = 0, x ∈ R, t > 0 (1); u(x, t) = u(x + L, t), u(x, 0) = g(x) (2). We are interested in a low-pass filtered solution of the K-S equation, ū, and want to develop a reduced system for ū. In general, ū can be written as the convolution of u with a low pass filter G(y):",
"title": ""
},
{
"docid": "a49abd0b1c03e39c83d9809fc344ba93",
"text": "Controller Area Network (CAN) is the leading serial bus system for embedded control. More than two billion CAN nodes have been sold since the protocol's development in the early 1980s. CAN is a mainstream network and was internationally standardized (ISO 11898–1) in 1993. This paper describes an approach to implementing security services on top of a higher level Controller Area Network (CAN) protocol, in particular, CANopen. Since the CAN network is an open, unsecured network, every node has access to all data on the bus. A system which produces and consumes sensitive data is not well suited for this environment. Therefore, a general-purpose security solution is needed which will allow secure nodes access to the basic security services such as authentication, integrity, and confidentiality.",
"title": ""
},
{
"docid": "2ab126b03a9cf3dd45c7d2342786326a",
"text": "Most existing techniques for spam detection on Twitter aim to identify and block users who post spam tweets. In this paper, we propose a semi-supervised spam detection (S3D) framework for spam detection at tweet-level. The proposed framework consists of two main modules: spam detection module operating in real-time mode and model update module operating in batch mode. The spam detection module consists of four lightweight detectors: 1) blacklisted domain detector to label tweets containing blacklisted URLs; 2) near-duplicate detector to label tweets that are near-duplicates of confidently prelabeled tweets; 3) reliable ham detector to label tweets that are posted by trusted users and that do not contain spammy words; and 4) multiclassifier-based detector labels the remaining tweets. The information required by the detection module is updated in batch mode based on the tweets that are labeled in the previous time window. Experiments on a large-scale data set show that the framework adaptively learns patterns of new spam activities and maintain good accuracy for spam detection in a tweet stream.",
"title": ""
},
{
"docid": "c904e36191df6989a5f38a52bc206342",
"text": "In present paper we proposed a simple and effective method to compress an image. Here we found success in size reduction of an image without much compromising with it’s quality. Here we used Haar Wavelet Transform to transform our original image and after quantization and thresholding of DWT coefficients Run length coding and Huffman coding schemes have been used to encode the image. DWT is base for quite populate JPEG 2000 technique. Keywords—lossy compression, DWT, quantization, Run length coding, Huffman coding, JPEG2000",
"title": ""
},
{
"docid": "cc28f89b2289c2b461331b22866a5285",
"text": "BACKGROUND\nCognitive-behavioural therapy (CBT)-based guided self-help (GSH) has been suggested to be an effective intervention for mild to moderate anxiety and depression, yet the evidence seems inconclusive, with some studies reporting that GSH is effective and others finding that GSH is ineffective. GSH differs in important respects from other levels of self-help, yet the literature regarding exclusively guided self-help interventions for anxiety and depression has not been reviewed systematically.\n\n\nMETHOD\nA literature search for randomized controlled trials (RCTs) examining CBT-based GSH interventions for anxiety and depressive disorders was conducted. Multiple electronic databases were searched; several journals spanning key disciplines were hand-searched; reference lists of included review articles were scanned and relevant first authors were contacted.\n\n\nRESULTS\nThirteen studies met the inclusion criteria. Meta-analysis indicated the effectiveness of GSH at post-treatment, although GSH was found to have limited effectiveness at follow-up or among more clinically representative samples. Studies that reported greater effectiveness of GSH tended to be of lower methodological quality and generally involved participants who were self-selected rather than recruited through clinical referrals.\n\n\nCONCLUSIONS\nAlthough there is support for the effectiveness of CBT-based GSH among media-recruited individuals, the finding that the reviewed RCTs had limited effectiveness within routine clinical practice demonstrates that the evidence is not conclusive. Further rigorous evidence based on clinical populations that examines longer-term outcomes is required before CBT-based GSH interventions can be deemed effective for adults accessing primary care services for treatment of anxiety and depression.",
"title": ""
},
{
"docid": "b9ef363fc7563dd14b3a4fd781d76d91",
"text": "Deep learning (DL)-based Reynolds stress with its capability to leverage values of large data can be used to close Reynolds-averaged Navier-Stoke (RANS) equations. Type I and Type II machine learning (ML) frameworks are studied to investigate data and flow feature requirements while training DL-based Reynolds stress. The paper presents a method, flow features coverage mapping (FFCM), to quantify the physics coverage of DL-based closures that can be used to examine the sufficiency of training data points as well as input flow features for data-driven turbulence models. Three case studies are formulated to demonstrate the properties of Type I and Type II ML. The first case indicates that errors of RANS equations with DL-based Reynolds stress by Type I ML are accumulated along with the simulation time when training data do not sufficiently cover transient details. The second case uses Type I ML to show that DL can figure out time history of flow transients from data sampled at various times. The case study also shows that the necessary and sufficient flow features of DL-based closures are first-order spatial derivatives of velocity fields. The last case demonstrates the limitation of Type II ML for unsteady flow simulation. Type II ML requires initial conditions to be sufficiently close to reference data. Then reference data can be used to improve RANS simulation.",
"title": ""
}
] |
scidocsrr
|
8e4ecdb6f44886d76310e61afb7a28aa
|
Soccer Event Detection
|
[
{
"docid": "3b300b9275b6da1aff685e5ca9b71252",
"text": "This paper presents an algorithm developed based on hidden Markov model for cues fusion and event inference in soccer video. Four events, shoot, foul, offside and normal playing, are defined to be detected. The states of the events are employed to model the observations of the five cues, which are extracted from the shot sequences directly. The experimental results show the algorithm is effective and robust in inferring events from roughly extracted cues.",
"title": ""
}
] |
[
{
"docid": "cf5d0f7079bd7bc1a197573e28b5569a",
"text": "More and more people rely on mobile devices to access the Internet, which also increases the amount of private information that can be gathered from people's devices. Although today's smartphone operating systems are trying to provide a secure environment, they fail to provide users with adequate control over and visibility into how third-party applications use their private data. Whereas there are a few tools that alert users when applications leak private information, these tools are often hard to use by the average user or have other problems. To address these problems, we present PrivacyGuard, an open-source VPN-based platform for intercepting the network traffic of applications. PrivacyGuard requires neither root permissions nor any knowledge about VPN technology from its users. PrivacyGuard does not significantly increase the trusted computing base since PrivacyGuard runs in its entirety on the local device and traffic is not routed through a remote VPN server. We implement PrivacyGuard on the Android platform by taking advantage of the VPNService class provided by the Android SDK.\n PrivacyGuard is configurable, extensible, and useful for many different purposes. We investigate its use for detecting the leakage of multiple types of sensitive data, such as a phone's IMEI number or location data. PrivacyGuard also supports modifying the leaked information and replacing it with crafted data for privacy protection. According to our experiments, PrivacyGuard can detect more leakage incidents by applications and advertisement libraries than TaintDroid. We also demonstrate that PrivacyGuard has reasonable overhead on network performance and almost no overhead on battery consumption.",
"title": ""
},
{
"docid": "e1958dc823feee7f88ab5bf256655bee",
"text": "We describe an approach for testing a software system for possible securi ty flaws. Traditionally, security testing is done using penetration analysis and formal methods. Based on the observation that most security flaws are triggered due to a flawed interaction with the envi ronment, we view the security testing problem as the problem of testing for the fault-tolerance prop erties of a software system. We consider each environment perturbation as a fault and the resulting security ompromise a failure in the toleration of such faults. Our approach is based on the well known techn ique of fault-injection. Environment faults are injected into the system under test and system beha vior observed. The failure to tolerate faults is an indicator of a potential security flaw in the syst em. An Environment-Application Interaction (EAI) fault model is proposed. EAI allows us to decide what f aults to inject. Based on EAI, we present a security-flaw classification scheme. This scheme was used to classif y 142 security flaws in a vulnerability database. This classification revealed that 91% of the security flaws in the database are covered by the EAI model.",
"title": ""
},
{
"docid": "ce0db8d66736076085ace654a6847b11",
"text": "Barcode technology is one of the most important parts of Automatic Identification and Data Capture (AIDC). Quick Response code (QR code) is one of the most popular types of two-dimensional barcodes. How to decode various QR code images efficiently and accurately is a challenge. In this paper, we revise the traditional decoding procedure by proposing a serial of carefully designed preprocessing methods. The decoding procedure consists of image binarization, QR code extraction, perspective transformation and resampling, and error correction. By these steps, we can recognize different types of QR code images. The experiment results show that our method has better accuracy than Google open-source 1D/2D barcode image processing library Zxing-2.1. Moreover, we evaluate the execution time for different-size images. Our method can decode these images in real time.",
"title": ""
},
{
"docid": "51f686a1056f389ff69855887e3f4f3b",
"text": "Pipelining has been used in the design of many PRAM algorithms to reduce their asymptotic running time. Paul, Vishkin, and Wagener (PVW) used the approach in a parallel implementation of 2-3 trees. The approach was later used by Cole in the first O( lg n) time sorting algorithm on the PRAM not based on the AKS sorting network, and has since been used to improve the time of several other algorithms. Although the approach has improved the asymptotic time of many algorithms, there are two practical problems: maintaining the pipeline is quite complicated for the programmer, and the pipelining forces highly synchronous code execution. Synchronous execution is less practical on asynchronous machines and makes it difficult to modify a schedule to use less memory or to take better advantage of locality. In this paper we show how futures (a parallel language construct) can be used to implement pipelining without requiring the user to code it explicitly, allowing for much simpler code and more asynchronous execution. A runtime system manages the pipelining implicitly. As with user-managed pipelining, we show how the technique reduces the depth of many algorithms by a logarithmic factor over the nonpipelined version. We describe and analyze four algorithms for which this is the case: a parallel merging algorithm on trees, parallel algorithms for finding the union and difference of two randomized balanced trees (treaps), and insertion into a variant of the PVW 2-3 trees. For three of these, the pipeline delays are data dependent making them particularly difficult to pipeline by hand. To determine the runtime of algorithms we first analyze the algorithms in a language-based cost model in terms of the work w and depth d of the computations, and then show universal bounds for implementing the language on various machine models.",
"title": ""
},
{
"docid": "7182dfe75bc09df526da51cd5c8c8d20",
"text": "Rapid progress has been made towards question answering (QA) systems that can extract answers from text. Existing neural approaches make use of expensive bidirectional attention mechanisms or score all possible answer spans, limiting scalability. We propose instead to cast extractive QA as an iterative search problem: select the answer’s sentence, start word, and end word. This representation reduces the space of each search step and allows computation to be conditionally allocated to promising search paths. We show that globally normalizing the decision process and back-propagating through beam search makes this representation viable and learning efficient. We empirically demonstrate the benefits of this approach using our model, Globally Normalized Reader (GNR), which achieves the second highest single model performance on the Stanford Question Answering Dataset (68.4 EM, 76.21 F1 dev) and is 24.7x faster than bi-attention-flow. We also introduce a data-augmentation method to produce semantically valid examples by aligning named entities to a knowledge base and swapping them with new entities of the same type. This method improves the performance of all models considered in this work and is of independent interest for a variety of NLP tasks.",
"title": ""
},
{
"docid": "ead92535c188bebd2285358c83fc0a07",
"text": "BACKGROUND\nIndigenous peoples of Australia, Canada, United States and New Zealand experience disproportionately high rates of suicide. As such, the methodological quality of evaluations of suicide prevention interventions targeting these Indigenous populations should be rigorously examined, in order to determine the extent to which they are effective for reducing rates of Indigenous suicide and suicidal behaviours. This systematic review aims to: 1) identify published evaluations of suicide prevention interventions targeting Indigenous peoples in Australia, Canada, United States and New Zealand; 2) critique their methodological quality; and 3) describe their main characteristics.\n\n\nMETHODS\nA systematic search of 17 electronic databases and 13 websites for the period 1981-2012 (inclusive) was undertaken. The reference lists of reviews of suicide prevention interventions were hand-searched for additional relevant studies not identified by the electronic and web search. The methodological quality of evaluations of suicide prevention interventions was assessed using a standardised assessment tool.\n\n\nRESULTS\nNine evaluations of suicide prevention interventions were identified: five targeting Native Americans; three targeting Aboriginal Australians; and one First Nation Canadians. The main intervention strategies employed included: Community Prevention, Gatekeeper Training, and Education. Only three of the nine evaluations measured changes in rates of suicide or suicidal behaviour, all of which reported significant improvements. The methodological quality of evaluations was variable. Particular problems included weak study designs, reliance on self-report measures, highly variable consent and follow-up rates, and the absence of economic or cost analyses.\n\n\nCONCLUSIONS\nThere is an urgent need for an increase in the number of evaluations of preventive interventions targeting reductions in Indigenous suicide using methodologically rigorous study designs across geographically and culturally diverse Indigenous populations. Combining and tailoring best evidence and culturally-specific individual strategies into one coherent suicide prevention program for delivery to whole Indigenous communities and/or population groups at high risk of suicide offers considerable promise.",
"title": ""
},
{
"docid": "24ea64d86683370bd39c084f3ac94f94",
"text": "Natural Language Understanding (NLU) systems need to encode human generated text (or speech) and reason over it at a deep semantic level. Any NLU system typically involves two main components: The first is an encoder, which composes words (or other basic linguistic units) within the input utterances to compute encoded representations, that are then used as features in the second component, a predictor, to reason over the encoded inputs and produce the desired output. We argue that performing these two steps over the utterances alone is seldom sufficient for understanding language, as the utterances themselves do not contain all the information needed for understanding them. We identify two kinds of additional knowledge needed to fill the gaps: background knowledge and contextual knowledge. The goal of this thesis is to build end-to-end NLU systems that encode inputs along with relevant background knowledge, and reason about them in the presence of contextual knowledge. The first part of the thesis deals with background knowledge. While distributional methods for encoding inputs have been used to represent meaning of words in the context of other words in the input, there are other aspects of semantics that are out of their reach. These are related to commonsense or real world information which is part of shared human knowledge but is not explicitly present in the input. We address this limitation by having the encoders also encode background knowledge, and present two approaches for doing so. The first is by modeling the selectional restrictions verbs place on their semantic role fillers. We use this model to encode events, and show that these event representations are useful in detecting newswire anomalies. Our second approach towards augmenting distributional methods is to use external knowledge bases like WordNet. We compute ontologygrounded token-level representations of words and show that they are useful in predicting prepositional phrase attachments and textual entailment. The second part of the thesis focuses on contextual knowledge. Machine comprehension tasks require interpreting input utterances in the context of other structured or unstructured information. This can be challenging for multiple reasons. Firstly, given some task-specific data, retrieving the relevant contextual knowledge from it can be a serious problem. Secondly, even when the relevant contextual knowledge is provided, reasoning over it might require executing a complex series of operations depending on the structure of the context and the compositionality of the input language. To handle reasoning over contexts, we first describe a type constrained neural semantic parsing framework for question answering (QA). We achieve state of the art performance on WIKITABLEQUESTIONS, a dataset with highly compositional questions over semi-structured tables. Proposed work in this area includes application of this framework to QA in other domains with weaker supervision. To address the challenge of retrieval, we propose to build neural network models with explicit memory components that can adaptively reason and learn to retrieve relevant context given a question.",
"title": ""
},
{
"docid": "12f6f7e9350d436cc167e00d72b6e1b1",
"text": "This paper reviews the state of the art of a polyphase complex filter for RF front-end low-IF transceivers applications. We then propose a multi-stage polyphase filter design to generate a quadrature I/Q signal to achieve a wideband precision quadrature phase shift with a constant 90 ° phase difference for self-interference cancellation circuit for full duplex radio. The number of the stages determines the bandwidth requirement of the channel. An increase of 87% in bandwidth is attained when our design is implemented in multi-stage from 2 to an extended 6 stages. A 4-stage polyphase filter achieves 2.3 GHz bandwidth.",
"title": ""
},
{
"docid": "59791087d518577c20708e544a5eec26",
"text": "This paper proposes an innovative fraud detection method, built upon existing fraud detection research and Minority Report, to deal with the data mining problem of skewed data distributions. This method uses backpropagation (BP), together with naive Bayesian (NB) and C4.5 algorithms, on data partitions derived from minority oversampling with replacement. Its originality lies in the use of a single meta-classifier (stacking) to choose the best base classifiers, and then combine these base classifiers' predictions (bagging) to improve cost savings (stacking-bagging). Results from a publicly available automobile insurance fraud detection data set demonstrate that stacking-bagging performs slightly better than the best performing bagged algorithm, C4.5, and its best classifier, C4.5 (2), in terms of cost savings. Stacking-bagging also outperforms the common technique used in industry (BP without both sampling and partitioning). Subsequently, this paper compares the new fraud detection method (meta-learning approach) against C4.5 trained using undersampling, oversampling, and SMOTEing without partitioning (sampling approach). Results show that, given a fixed decision threshold and cost matrix, the partitioning and multiple algorithms approach achieves marginally higher cost savings than varying the entire training data set with different class distributions. The most interesting find is confirming that the combination of classifiers to produce the best cost savings has its contributions from all three algorithms.",
"title": ""
},
{
"docid": "a3dc6a178b7861959b992387366c2c78",
"text": "Linked data and semantic web technologies are gaining impact and importance in the Architecture, Engineering, Construction and Facility Management (AEC/FM) industry. Whereas we have seen a strong technological shift with the emergence of Building Information Modeling (BIM) tools, this second technological shift to the exchange and management of building data over the web might be even stronger than the first one. In order to make this a success, the AEC/FM industry will need strong and appropriate ontologies, as they will allow industry practitioners to structure their data in a commonly agreed format and exchange the data. Herein, we look at the ontologies that are emerging in the area of Building Automation and Control Systems (BACS). We propose a BACS ontology in strong alignment with existing ontologies and evaluate how it can be used for capturing automation and control systems of a building by modeling a use case.",
"title": ""
},
{
"docid": "fe42601df14bf7cae60b7d640004005b",
"text": "Multi-Touch Attribution studies the effects of various types of online advertisements on purchase conversions. It is a very important problem in computational advertising, as it allows marketers to assign credits for conversions to different advertising channels and optimize advertising campaigns. In this paper, we propose an additional multi-touch attribution model (AMTA) based on two obvious assumptions: (1) the effect of an ad exposure is fading with time and (2) the effects of ad exposures on the browsing path of a user are additive. AMTA borrows the techniques from survival analysis and uses the hazard rate to measure the influence of an ad exposure. In addition, we both take the conversion time and the intrinsic conversion rate of users into consideration to generate the probability of a conversion. Experimental results on a large real-world advertising dataset illustrate that the our proposed method is superior to state-of-the-art techniques in conversion rate prediction and the credit allocation based on AMTA is reasonable.",
"title": ""
},
{
"docid": "de05e649c6e77278b69665df3583d3d8",
"text": "This context-aware emotion-based model can help design intelligent agents for group decision making processes. Experiments show that agents with emotional awareness reach agreement more quickly than those without it.",
"title": ""
},
{
"docid": "29e173f7c7d52654d5fe437bc56f6d8c",
"text": "This paper presents a consistent methodology to assess the effect of variable-frequency operation on the dynamics of a flyback converter used in low-cost, high-volume, battery-charger applications. The self-oscillation is typically implemented using peak-current control in boundary conduction mode with a deterministic delay before switching on the next cycle. The modeling is based on the modified state-space-averaging method, where the effect of the varying cycle time and peak-current-mode control is included using special cycle-time and on-time constraints derived from the inductor-current waveforms. The method leads to accurate full-order models, where the effect of the circuit parasitics and the switching delay may also be taken into account. In most of the cases, the effect of parasitics and switching delay is, however, minimal, and therefore, simple small-signal models may be used. The applied switching delay affects the dc operating point in terms of duty ratio and switching frequency, and shall be, therefore, carefully considered.",
"title": ""
},
{
"docid": "61df2a452626b80ce815a0b9528a580b",
"text": "The nice guy stereotype asserts that, although women often say that they wish to date kind, sensitive men, when actually given a choice, women will reject nice men in favor of men with other salient characteristics, such as physical attractiveness. To explore this stereotype, two studies were conducted. In Study 1, 48 college women were randomly assigned into experimental conditions in which they read a script that depicted 2 men competing for a date with a woman. The niceness of 1 target man’s responses was manipulated across conditions. In Study 2, 194 college women were randomly assigned to conditions in which both the target man’s responses and his physical attractiveness were manipulated. Overall results indicated that both niceness and physical attractiveness were positive factors in women’s choices and desirability ratings of the target men. Niceness appeared to be the most salient factor when it came to desirability for more serious relationships, whereas physical attractiveness appeared more important in terms of desirability for more casual, sexual relationships.",
"title": ""
},
{
"docid": "601c873836e93d75eccc0a477b224d99",
"text": "Community detection and influence analysis are significant notions in social networks. We exploit the implicit knowledge of influence-based connectivity and proximity encoded in the network topology, and propose a novel algorithm for both community detection and influence ranking. Using a new influence cascade model, the algorithm generates an influence vector for each node, which captures in detail how the node's influence is distributed through the network. Similarity in this influence space defines a new, meaningful and refined connectivity measure for the closeness of any pair of nodes. Our approach not only differentiates the influence ranking but also effectively finds communities in both undirected and directed networks, and incorporates these two important tasks into one integrated framework. We demonstrate its superior performance with extensive tests on a set of real-world networks and synthetic benchmarks.",
"title": ""
},
{
"docid": "bb2c01181664baaf20012e321b5e1f9f",
"text": "Systems able to suggest items that a user may be interested in are usually named as Recommender Systems. The new emergent field of Recommender Systems has undoubtedly gained much interest in the research community. Although Recommender Systems work well in suggesting books, movies and items of general interest, many users express today a feeling that the existing systems don’t actually identify them as individual personalities. This dissatisfaction turned the research society towards the development of new approaches on Recommender Systems, more user-centric. A methodology originated from Decision Theory is exploited herein, aiming to address to the lack of personalization in Recommender Systems by integrating the user in the recommendation process.",
"title": ""
},
{
"docid": "68a31c4830f71e7e94b90227d69b5a79",
"text": "For many primary storage customers, storage must balance the requirements for large capacity, high performance, and low cost. A well studied technique is to place a solid state drive (SSD) cache in front of hard disk drive (HDD) storage, which can achieve much of the performance benefit of SSDs and the cost per gigabyte efficiency of HDDs. To further lower the cost of SSD caches and increase effective capacity, we propose the addition of data reduction techniques. Our cache architecture, called Nitro, has three main contributions: (1) an SSD cache design with adjustable deduplication, compression, and large replacement units, (2) an evaluation of the trade-offs between data reduction, RAM requirements, SSD writes (reduced up to 53%, which improves lifespan), and storage performance, and (3) acceleration of two prototype storage systems with an increase in IOPS (up to 120%) and reduction of read response time (up to 55%) compared to an SSD cache without Nitro. Additional benefits of Nitro include improved random read performance, faster snapshot restore, and reduced writes to SSDs.",
"title": ""
},
{
"docid": "f392b4ba1cface8be439bf86a3e4c2bd",
"text": "STUDY DESIGN\nCase-control study comparing sagittal plane segmental motion in women (n = 34) with chronic whiplash-associated disorders, Grades I-II, with women (n = 35) with chronic insidious onset neck pain and with a normal database of sagittal plane rotational and translational motion.\n\n\nOBJECTIVE\nTo reveal whether women with chronic whiplash-associated disorders, Grades I-II, demonstrate evidence of abnormal segmental motions in the cervical spine.\n\n\nSUMMARY OF BACKGROUND DATA\nIt is hypothesized that unphysiological spinal motion experienced during an automobile accident may result in a persistent disturbance of segmental motion. It is not known whether patients with chronic whiplash-associated disorders differ from patients with chronic insidious onset neck pain with respect to segmental mobility.\n\n\nMETHODS\nLateral radiographic views were taken in assisted maximal flexion and extension. A new measurement protocol determined rotational and translational motions of segments C3-C4 and C5-C6 with high precision. Segmental motion was compared with normal data as well as among groups.\n\n\nRESULTS\nIn the whiplash-associated disorders group, the C3-C4 and C4-C5 segments showed significantly increased rotational motions. Translational motions within each segment revealed a significant deviation from normal at the C3-C4 segment in the whiplash-associated disorders and insidious onset neck pain groups and at the C5-C6 segment in the whiplash-associated disorders group. Significantly more women in the whiplash-associated disorders group (35.3%) had abnormal increased segmental motions compared to the insidious onset neck pain group (8.6%) when both the rotational and the translational parameters were analyzed. When the translational parameter was analyzed separately, no significant difference was found between groups, or 17.6% (whiplash-associated disorders group) and 8.6% (insidious onset neck pain group), respectively.\n\n\nCONCLUSION\nHypermobility in the lower cervical spine segments in 12 out of 34 patients with chronic whiplash-associated disorders in this study point to injury caused by the accident. This subgroup, identified by the new radiographic protocol, might need a specific therapeutic intervention.",
"title": ""
},
{
"docid": "9904ac77b96bdd634322701a53149b4e",
"text": "Brain-computer interface can have a profound impact on the life of paralyzed or elderly citizens as they offer control over various devices without any necessity of movement of the body parts. This technology has come a long way and opened new dimensions in improving our life. Use of electroencephalogram (EEG wave) based control schemes can change the shape of the lives of the disabled citizens if incorporated with an electric wheelchair through a wearable device. Electric wheelchairs are nowadays commercially available which provides mobility to the disabled persons with relative ease. But most of the commercially available products are much expensive and controlled through the joystick, hand gesture, voice command, etc. which may not be viable control scheme for severely disabled or paralyzed persons. In our research work, we have developed a low-cost electric wheelchair using locally available cheap parts and incorporated brain-computer interface considering the affordability of people from developing countries. So, people who have lost their control over their limbs or have the inability to drive a wheelchair by any means can control the proposed wheelchair only by their attention and willingness to blink. To acquire the signal of attention and blink, single channel electroencephalogram (EEG wave) was captured by a wearable Neurosky MindWave Mobile. One of the salient features of the proposed scheme is ‘Destination Mapping’ by which the wheelchair develops a virtual map as the user moves around and autonomously reaches desired positions afterward by taking command from a smart interface based on EEG signal. From the experiments that were carried out at different stages of the development, it was exposed that, such a wheelchair is easy to train and calibrate for different users and offers a low cost and smart alternative especially for the elderly people in developing countries.",
"title": ""
},
{
"docid": "565c949a2bf8b6f6c3d246c7c195419d",
"text": "Extracorporeal photochemotherapy (ECP) is an effective treatment modality for patients with erythrodermic myocosis fungoides (MF) and Sezary syndrome (SS). During ECP, a fraction of peripheral blood mononuclear cells is collected, incubated ex-vivo with methoxypsoralen, UVA irradiated, and finally reinfused to the patient. Although the mechanism of action of ECP is not well established, clinical and laboratory observations support the hypothesis of a vaccination-like effect. ECP induces apoptosis of normal and neoplastic lymphocytes, while enhancing differentiation of monocytes towards immature dendritic cells (imDCs), followed by engulfment of apoptotic bodies. After reinfusion, imDCs undergo maturation and antigenic peptides from the neoplastic cells are expressed on the surface of DCs. Mature DCs travel to lymph nodes and activate cytotoxic T-cell clones with specificity against tumor antigens. Disease control is mediated through cytotoxic T-lymphocytes with tumor specificity. The efficacy and excellent safety profile of ECP has been shown in a large number of retrospective trials. Previous studies showed that monotherapy with ECP produces an overall response rate of approximately 60%, while clinical data support that ECP is much more effective when combined with other immune modulating agents such as interferons or retinoids, or when used as consolidation treatment after total skin electron beam irradiation. However, only a proportion of patients actually respond to ECP and parameters predictive of response need to be discovered. A patient with a high probability of response to ECP must fulfill all of the following criteria: (1) SS or erythrodermic MF, (2) presence of neoplastic cells in peripheral blood, and (3) early disease onset. Despite the fact that ECP has been established as a standard treatment modality, no prospective randomized study has been conducted so far, to the authors' knowledge. Considering the high cost of the procedure, the role of ECP in the treatment of SS/MF needs to be clarified via well designed multicenter prospective randomized trials.",
"title": ""
}
] |
scidocsrr
|
6497cb6d9d4b6acb09b47a740d972647
|
Sustaining Superior Performance in Business Ecosystems: Evidence from Application Software Developers in the iOS and Android Smartphone Ecosystems
|
[
{
"docid": "4a87e61106125ffdd49c42517ce78b87",
"text": "Due to network effects and switching costs, platform providers often become entrenched. To dislodge them, entrants generally must offer revolutionary products. We explore a second path to platform leadership change that does not rely on Schumpeterian creative destruction: platform envelopment. By leveraging common components and shared user relationships, one platform provider can move into another’s market, combining its own functionality with the target’s in a multi-platform bundle. Dominant firms otherwise sheltered from entry by standalone rivals may be vulnerable to an adjacent platform provider’s envelopment attack. We analyze conditions under which envelopment strategies are likely to succeed.",
"title": ""
}
] |
[
{
"docid": "7c4822a90e594a27ddb9d6dd3e6aeb38",
"text": "It is shown that if there are P noncoincident input patterns to learn and a two-layered feedforward neural network having P-1 sigmoidal hidden neuron and one dummy hidden neuron is used for the learning, then any suboptimal equilibrium point of the corresponding error surface is unstable in the sense of Lyapunov. This result leads to a sufficient local minima free condition for the backpropagation learning.",
"title": ""
},
{
"docid": "e84e27610c27b5880977aca20d04dba3",
"text": "Automatic bug fixing has become a promising direction for reducing manual effort in debugging. However, general approaches to automatic bug fixing may face some fundamental difficulties. In this paper, we argue that automatic fixing of specific types of bugs can be a useful complement.\n This paper reports our first attempt towards automatically fixing memory leaks in C programs. Our approach generates only safe fixes, which are guaranteed not to interrupt normal execution of the program. To design such an approach, we have to deal with several challenging problems such as inter-procedural leaks, global variables, loops, and leaks from multiple allocations. We propose solutions to all the problems and integrate the solutions into a coherent approach.\n We implemented our inter-procedural memory leak fixing into a tool named LeakFix and evaluated LeakFix on 15 programs with 522k lines of code. Our evaluation shows that LeakFix is able to successfully fix a substantial number of memory leaks, and LeakFix is scalable for large applications.",
"title": ""
},
{
"docid": "da5c937980e0319b236bc75912643534",
"text": "Digital information has become a social infrastructure and with the expansion of the Internet, network infrastructure has become an indispensable part of social life and industrial activity for mankind. For various reasons, however, today's networks are vulnerable to numerous risks, such as information leakage, privacy infringement and data corruption. Through this research, the authors tried to establish an in-depth understanding of the importance of anonymous communication in social networking which is mostly used by ordinary and non-technical people. It demonstrates how the commonly used non-anonymous communication scheme in social networking can turn the Internet into a very dangerous platform because of its built-in nature making its users' identity easily traceable. After providing some introductory information on internet protocol (IP), internal working mechanism of social networking and concept of anonymity on the Internet, Facebook is used as a case study in demonstrating how various network tracing tools and gimmicks could be used to reveal identity of its users and victimize many innocent people. It then demonstrates working mechanism of various tools that can turn the Facebook social networking site into a safe and anonymous platform. The paper concludes by summarizing pros and cons of various anonymous communication techniques and highlighting its importance for social networking platforms.",
"title": ""
},
{
"docid": "63a481d452c6f566d88fdb9fa9d21703",
"text": "Compressive sensing (CS) is a novel sampling paradigm that samples signals in a much more efficient way than the established Nyquist sampling theorem. CS has recently gained a lot of attention due to its exploitation of signal sparsity. Sparsity, an inherent characteristic of many natural signals, enables the signal to be stored in few samples and subsequently be recovered accurately, courtesy of CS. This article gives a brief background on the origins of this idea, reviews the basic mathematical foundation of the theory and then goes on to highlight different areas of its application with a major emphasis on communications and network domain. Finally, the survey concludes by identifying new areas of research where CS could be beneficial.",
"title": ""
},
{
"docid": "142b1f178ade5b7ff554eae9cad27f69",
"text": "It is often desirable to be able to recognize when inputs to a recognition function learned in a supervised manner correspond to classes unseen at training time. With this ability, new class labels could be assigned to these inputs by a human operator, allowing them to be incorporated into the recognition function—ideally under an efficient incremental update mechanism. While good algorithms that assume inputs from a fixed set of classes exist, e.g. , artificial neural networks and kernel machines, it is not immediately obvious how to extend them to perform incremental learning in the presence of unknown query classes. Existing algorithms take little to no distributional information into account when learning recognition functions and lack a strong theoretical foundation. We address this gap by formulating a novel, theoretically sound classifier—the Extreme Value Machine (EVM). The EVM has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning. Compared to other classifiers in the same deep network derived feature space, the EVM is accurate and efficient on an established benchmark partition of the ImageNet dataset.",
"title": ""
},
{
"docid": "ff947ccb7efdd5517f9b60f9c11ade6a",
"text": "Several messages express opinions about events, products, and services, political views or even their author's emotional state and mood. Sentiment analysis has been used in several applications including analysis of the repercussions of events in social networks, analysis of opinions about products and services, and simply to better understand aspects of social communication in Online Social Networks (OSNs). There are multiple methods for measuring sentiments, including lexical-based approaches and supervised machine learning methods. Despite the wide use and popularity of some methods, it is unclear which method is better for identifying the polarity (i.e., positive or negative) of a message as the current literature does not provide a method of comparison among existing methods. Such a comparison is crucial for understanding the potential limitations, advantages, and disadvantages of popular methods in analyzing the content of OSNs messages. Our study aims at filling this gap by presenting comparisons of eight popular sentiment analysis methods in terms of coverage (i.e., the fraction of messages whose sentiment is identified) and agreement (i.e., the fraction of identified sentiments that are in tune with ground truth). We develop a new method that combines existing approaches, providing the best coverage results and competitive agreement. We also present a free Web service called iFeel, which provides an open API for accessing and comparing results across different sentiment methods for a given text.",
"title": ""
},
{
"docid": "7835bb8463eff6a7fbeec256068e1f09",
"text": "Efforts to incorporate intelligence into the user interface have been underway for decades, but the commercial impact of this work has not lived up to early expectations, and is not immediately apparent. This situation appears to be changing. However, so far the most interesting intelligent user interfaces (IUIS) have tended to use minimal or simplistic AI. In this panel we consider whether more or less AI is the key to the development of compelling IUIS. The panelists will present examples of compelling IUIS that use a selection of AI techniques, mostly simple, but some complex. Each panelist will then comment on the merits of different kinds and quantities of AI in the development of pragmatic interface technology.",
"title": ""
},
{
"docid": "b540fb20a265d315503543a5d752f486",
"text": "Deep convolutional networks have witnessed unprecedented success in various machine learning applications. Formal understanding on what makes these networks so successful is gradually unfolding, but for the most part there are still significant mysteries to unravel. The inductive bias, which reflects prior knowledge embedded in the network architecture, is one of them. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning. We use this connection for asserting novel theoretical observations regarding the role that the number of channels in each layer of the convolutional network fulfills in the overall inductive bias. Specifically, we show an equivalence between the function realized by a deep convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which relies on their common underlying tensorial structure. This facilitates the use of quantum entanglement measures as welldefined quantifiers of a deep network’s expressive ability to model intricate correlation structures of its inputs. Most importantly, the construction of a deep convolutional arithmetic circuit in terms of a Tensor Network is made available. This description enables us to carry a graph-theoretic analysis of a convolutional network, tying its expressiveness to a min-cut in the graph which characterizes it. Thus, we demonstrate a direct control over the inductive bias of the designed deep convolutional network via its channel numbers, which we show to be related to this min-cut in the underlying graph. This result is relevant to any practitioner designing a convolutional network for a specific task. We theoretically analyze convolutional arithmetic circuits, and empirically validate our findings on more common convolutional networks which involve ReLU activations and max pooling. Beyond the results described above, the description of a deep convolutional network in well-defined graph-theoretic tools and the formal structural connection to quantum entanglement, are two interdisciplinary bridges that are brought forth by this work.",
"title": ""
},
{
"docid": "9c79105367f92ee1d6ac604af2105bf2",
"text": "Vector controlled motor drives are widely used in industry application areas, usually they contain two current sensors and a speed sensor. A fault diagnosis and reconfiguration structure is proposed in this paper including current sensor measurement errors and sensors open-circuit fault. Sliding windows and special features are designed to real-time detect the measurement errors, compensations are made according to detected offset and scaling values. When open-circuit faults occur, sensor outputs are constant-zero, the residuals between the Extended Kalman Filter (EKF) outputs and the sensors outputs are larger than pre-defined close-to-zero thresholds, under healthy condition, the residuals are equal to zero, as a result, the residuals can be used for open circuit fault detection. In this situation, the feedback signals immediately switch to EKF outputs to realize reconfiguration. Fair robustness are evaluated under disturbance such as load torque changes and variable speed. Simulation results show the effectiveness and merits of the proposed methods in this paper.",
"title": ""
},
{
"docid": "c53f8e3d8ca800284ce22748d7afde59",
"text": "With the expansion of software scale, effective approaches for automatic vulnerability mining have been in badly needed. This paper presents a novel approach which can generate test cases of high pertinence and reachability. Unlike standard fuzzing techniques which explore the test space blindly, our approach utilizes abstract interpretation based on intervals to locate the Frail-Points of program which may cause buffer over-flow in some special conditions and the technique of static taint trace to build mappings between the Frail-Points and program inputs. Moreover, acquire path constraints of each Frail-Point through symbolic execution. Finally, combine information of mappings and path constraints to propose a policy for guiding test case generation.",
"title": ""
},
{
"docid": "27f7025c2ee602b5ad2dee830836bbef",
"text": "Arsenic contamination of rice is widespread, but the rhizosphere processes influencing arsenic attenuation remain unresolved. In particular, the formation of Fe plaque around rice roots is thought to be an important barrier to As uptake, but the relative importance of this mechanism is not well characterized. Here we elucidate the colocalization of As species and Fe on rice roots with variable Fe coatings; we used a combination of techniques--X-ray fluorescence imaging, μXANES, transmission X-ray microscopy, and tomography--for this purpose. Two dominant As species were observed in fine roots-inorganic As(V) and As(III) -with minor amounts of dimethylarsinic acid (DMA) and arsenic trisglutathione (AsGlu(3)). Our investigation shows that variable Fe plaque formation affects As entry into rice roots. In roots with Fe plaque, As and Fe were strongly colocated around the root; however, maximal As and Fe were dissociated and did not encapsulate roots that had minimal Fe plaque. Moreover, As was not exclusively associated with Fe plaque in the rice root system; Fe plaque does not coat many of the young roots or the younger portion of mature roots. Young, fine roots, important for solute uptake, have little to no iron plaque. Thus, Fe plaque does not directly intercept (and hence restrict) As supply to and uptake by rice roots but rather serves as a bulk scavenger of As predominantly near the root base.",
"title": ""
},
{
"docid": "abea5fcab86877f1d085183a714bc37d",
"text": "In this work, we introduce the challenging problem of joint multi-person pose estimation and tracking of an unknown number of persons in unconstrained videos. Existing methods for multi-person pose estimation in images cannot be applied directly to this problem, since it also requires to solve the problem of person association over time in addition to the pose estimation for each person. We therefore propose a novel method that jointly models multi-person pose estimation and tracking in a single formulation. To this end, we represent body joint detections in a video by a spatio-temporal graph and solve an integer linear program to partition the graph into sub-graphs that correspond to plausible body pose trajectories for each person. The proposed approach implicitly handles occlusion and truncation of persons. Since the problem has not been addressed quantitatively in the literature, we introduce a challenging Multi-Person PoseTrack dataset, and also propose a completely unconstrained evaluation protocol that does not make any assumptions about the scale, size, location or the number of persons. Finally, we evaluate the proposed approach and several baseline methods on our new dataset.",
"title": ""
},
{
"docid": "de1fe89adbc6e4a8993eb90cae39d97e",
"text": "Decision trees have proved to be valuable tools for the description, classification and generalization of data. Work on constructing decision trees from data exists in multiple disciplines such as statistics, pattern recognition, decision theory, signal processing, machine learning and artificial neural networks. Researchers in these disciplines, sometimes working on quite different problems, identified similar issues and heuristics for decision tree construction. This paper surveys existing work on decision tree construction, attempting to identify the important issues involved, directions the work has taken and the current state of the art.",
"title": ""
},
{
"docid": "060554d306615e0d99271a2a43eda87a",
"text": "The most famous lattice problem is the Shortest Vector Problem (SVP), which has many applications in cryptology. The best approximation algorithms known for SVP in high dimension rely on a subroutine for exact SVP in low dimension. In this paper, we assess the practicality of the best (theoretical) algorithm known for exact SVP in low dimension: the sieve algorithm proposed by Ajtai, Kumar and Sivakumar (AKS) in 2001. AKS is a randomized algorithm of time and space complexity 2, which is theoretically much lower than the super-exponential complexity of all alternative SVP algorithms. Surprisingly, no implementation and no practical analysis of AKS has ever been reported. It was in fact widely believed that AKS was impractical: for instance, Schnorr claimed in 2003 that the constant hidden in the 2 complexity was at least 30. In this paper, we show that AKS can actually be made practical: we present a heuristic variant of AKS whose running time is (4/3+ε) polynomial-time operations, and whose space requirement is (4/3+ε)n/2 polynomially many bits. Our implementation can experimentally find shortest lattice vectors up to dimension 50, but is slower than classical alternative SVP algorithms in these dimensions.",
"title": ""
},
{
"docid": "c8a0276919005f36a587d7d209063e2f",
"text": "Praveen Prakash1, Kuttapa Nishanth2, Nikul Jasani1, Aneesh Katyal1, US Krishna Nayak3 1Post Graduate Student, Department of Orthodontics & Dentofacial Orthopaedics, A.B. Shetty Memorial Institute of Dental Sciences, Mangalore, Karnataka, India, 2Professor, Department of Orthodontics & Dentofacial Orthopaedics, A.B. Shetty Memorial Institute of Dental Sciences, Mangalore, Karnataka, India, 3Dean Academics, Head of Department, Department of Orthodontics & Dentofacial Orthopaedics, A.B. Shetty Memorial Institute of Dental Sciences, Mangalore, Karnataka, India",
"title": ""
},
{
"docid": "6d3dc0ea95cd8626aff6938748b58d1a",
"text": "Mango cultivation methods being adopted currently are ineffective and low productive despite consuming huge man power. Advancements in robust unmanned aerial vehicles (UAV's), high speed image processing algorithms and machine vision techniques, reinforce the possibility of transforming agricultural scenario to modernity within prevailing time and energy constraints. Present paper introduces Agricultural Aid for Mango cutting (AAM), an Agribot that could be employed for precision mango farming. It is a quadcopter empowered with vision and cutter systems complemented with necessary ancillaries. It could hover around the trees, detect the ripe mangoes, cut and collect them. Paper also sheds light on the available Agribots that have mostly been limited to the research labs. AAM robot is the first of its kind that once implemented could pave way to the next generation Agribots capable of increasing the agricultural productivity and justify the existence of intelligent machines.",
"title": ""
},
{
"docid": "a1f05b8954434a782f9be3d9cd10bb8b",
"text": "Because of their avid use of new media and their increased spending power, children and teens have become primary targets of a new \"media and marketing ecosystem.\" The digital marketplace is undergoing rapid innovation as new technologies and software applications continue to reshape the media landscape and user behaviors. The advertising industry, in many instances led by food and beverage marketers, is purposefully exploiting the special relationship that youth have with new media, as online marketing campaigns create unprecedented intimacies between adolescents and the brands and products that now literally surround them.",
"title": ""
},
{
"docid": "78539b627037a491dade4a1e8abdaa0b",
"text": "Scholarly citations from one publication to another, expressed as reference lists within academic articles, are core elements of scholarly communication. Unfortunately, they usually can be accessed en masse only by paying significant subscription fees to commercial organizations, while those few services that do made them available for free impose strict limitations on their reuse. In this paper we provide an overview of the OpenCitations Project (http://opencitations.net) undertaken to remedy this situation, and of its main product, the OpenCitations Corpus, which is an open repository of accurate bibliographic citation data harvested from the scholarly literature, made available in RDF under a Creative Commons public domain dedication. RASH version: https://w3id.org/oc/paper/occ-lisc2016.html",
"title": ""
},
{
"docid": "4248ea350416596301e551dd48334770",
"text": "The era of big data has led to the emergence of new systems for real-time distributed stream processing, e.g., Apache Storm is one of the most popular stream processing systems in industry today. However, Storm, like many other stream processing systems lacks an intelligent scheduling mechanism. The default round-robin scheduling currently deployed in Storm disregards resource demands and availability, and can therefore be inefficient at times. We present R-Storm (Resource-Aware Storm), a system that implements resource-aware scheduling within Storm. R-Storm is designed to increase overall throughput by maximizing resource utilization while minimizing network latency. When scheduling tasks, R-Storm can satisfy both soft and hard resource constraints as well as minimizing network distance between components that communicate with each other. We evaluate R-Storm on set of micro-benchmark Storm applications as well as Storm applications used in production at Yahoo! Inc. From our experimental results we conclude that R-Storm achieves 30-47% higher throughput and 69-350% better CPU utilization than default Storm for the micro-benchmarks. For the Yahoo! Storm applications, R-Storm outperforms default Storm by around 50% based on overall throughput. We also demonstrate that R-Storm performs much better when scheduling multiple Storm applications than default Storm.",
"title": ""
},
{
"docid": "19a28d8bbb1f09c56f5c85be003a9586",
"text": "ABSTRACT: Five questionnaires for assessing the usability of a website were compared in a study with 123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for several years. Each participant performed two tasks on each of two websites: finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was significantly preferred over the other. The data were analyzed to determine what the results would have been at different sample sizes from 6 to 14. At a sample size of 6, only 30-40% of the samples would have identified that one of the sites was significantly preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at least 90% of the time.",
"title": ""
}
] |
scidocsrr
|
b1b8a5a657a00bb918bbc7a21dfe0117
|
Reconfigurable Path Planning for an Autonomous Unmanned Aerial Vehicle
|
[
{
"docid": "7bb6e6b296c918893f572a63082ab353",
"text": "This paper presents a novel randomized motion planner for robots that must achieve a specified goal under kinematic and/or dynamic motion constraints while avoiding collision with moving obstacles with known trajectories. The planner encodes the motion constraints on the robot with a control system and samples the robot’s state time space by picking control inputs at random and integrating its equations of motion. The result is a probabilistic roadmap of sampled state time points, called milestones, connected by short admissible trajectories. The planner does not precompute the roadmap; instead, for each planning query, it generates a new roadmap to connect an initial and a goal state time point. The paper presents a detailed analysis of the planner’s convergence rate. It shows that, if the state time space satisfies a geometric property called expansiveness, then a slightly idealized version of our implemented planner is guaranteed to find a trajectory when one exists, with probability quickly converging to 1, as the number of of milestones increases. Our planner was tested extensively not only in simulated environments, but also on a real robot. In the latter case, a vision module estimates obstacle motions just before planning starts. The planner is then allocated a small, fixed amount of time to compute a trajectory. If a change in the expected motion of the obstacles is detected while the robot executes the planned trajectory, the planner recomputes a trajectory on the fly. Experiments on the real robot led to several extensions of the planner in order to deal with time delays and uncertainties that are inherent to an integrated robotic system interacting with the physical world.",
"title": ""
}
] |
[
{
"docid": "f3d234211bd93c9f61faf92d12919b27",
"text": "BACKGROUND\nPeanut allergy is a major public health problem that affects 1% of the population and has no effective therapy.\n\n\nOBJECTIVE\nTo examine the safety and efficacy of oral desensitization in peanut-allergic children in combination with a brief course of anti-IgE mAb (omalizumab [Xolair]).\n\n\nMETHODS\nWe performed oral peanut desensitization in peanut-allergic children at high risk for developing significant peanut-induced allergic reactions. Omalizumab was administered before and during oral peanut desensitization.\n\n\nRESULTS\nWe enrolled 13 children (median age, 10 years), with a median peanut-specific IgE level of 229 kU(A)/L and a median total serum IgE level of 621 kU/L, who failed an initial double-blind placebo-controlled food challenge at peanut flour doses of 100 mg or less. After pretreatment with omalizumab, all 13 subjects tolerated the initial 11 desensitization doses given on the first day, including the maximum dose of 500 mg peanut flour (cumulative dose, 992 mg, equivalent to >2 peanuts), requiring minimal or no rescue therapy. Twelve subjects then reached the maximum maintenance dose of 4000 mg peanut flour per day in a median time of 8 weeks, at which point omalizumab was discontinued. All 12 subjects continued on 4000 mg peanut flour per day and subsequently tolerated a challenge with 8000 mg peanut flour (equivalent to about 20 peanuts), or 160 to 400 times the dose tolerated before desensitization. During the study, 6 of the 13 subjects experienced mild or no allergic reactions, 5 subjects had grade 2 reactions, and 2 subjects had grade 3 reactions, all of which responded rapidly to treatment.\n\n\nCONCLUSIONS\nAmong children with high-risk peanut allergy, treatment with omalizumab may facilitate rapid oral desensitization and qualitatively improve the desensitization process.",
"title": ""
},
{
"docid": "2793ce9ebdf3b3d90c5d005f01267cef",
"text": "We present an algorithm, HI-MAT (Hierarchy Induction via Models And Trajectories), that discovers MAXQ task hierarchies by applying dynamic Bayesian network models to a successful trajectory from a source reinforcement learning task. HI-MAT discovers subtasks by analyzing the causal and temporal relationships among the actions in the trajectory. Under appropriate assumptions, HI-MAT induces hierarchies that are consistent with the observed trajectory and have compact value-function tables employing safe state abstractions. We demonstrate empirically that HI-MAT constructs compact hierarchies that are comparable to manually-engineered hierarchies and facilitate significant speedup in learning when transferred to a target task.",
"title": ""
},
{
"docid": "8f0da69d48c3d5098018b2e5046b6e8e",
"text": "Halogenated aliphatic compounds have many technical uses, but substances within this group are also ubiquitous environmental pollutants that can affect the ozone layer and contribute to global warming. The establishment of quantitative structure-property relationships is of interest not only to fill in gaps in the available database but also to validate experimental data already acquired. The three-dimensional structures of 240 compounds were modeled with molecular mechanics prior to the generation of empirical descriptors. Two bilinear projection methods, principal component analysis (PCA) and partial-least-squares regression (PLSR), were used to identify outliers. PLSR was subsequently used to build a multivariate calibration model by extracting the latent variables that describe most of the covariation between the molecular structure and the boiling point. Boiling points were also estimated with an extension of the group contribution method of Stein and Brown.",
"title": ""
},
{
"docid": "518d8e621e1239a94f50be3d5e2982f9",
"text": "With a number of emerging biometric applications there is a dire need of less expensive authentication technique which can authenticate even if the input image is of low resolution and low quality. Foot biometric has both the physiological and behavioral characteristics still it is an abandoned field. The reason behind this is, it involves removal of shoes and socks while capturing the image and also dirty feet makes the image noisy. Cracked heels is also a reason behind noisy images. Physiological and behavioral biometric characteristics makes it a great alternative to computational intensive algorithms like fingerprint, palm print, retina or iris scan [1] and face. On one hand foot biometric has minutia features which is considered totally unique. The uniqueness of minutiae feature is already tested in fingerprint analysis [2]. On the other hand it has geometric features like hand geometry which also give satisfactory results in recognition. We can easily apply foot biometrics at those places where people inherently remove their shoes, like at holy places such as temples and mosque people remove their shoes before entering from the perspective of faith, and also remove shoes at famous monuments such as The Taj Mahal, India from the perspective of cleanliness and preservation. Usually these are the places with a strong foot fall and high risk security due to chaotic crowd. Most of the robbery, theft, terrorist attacks, are happening at these places. One very fine example is Akshardham attack in September 2002. Hence we can secure these places using low cost security algorithms based on footprint recognition.",
"title": ""
},
{
"docid": "be7e30d4ebae196b9cdde7b5d6f79951",
"text": "This paper introduces a new quadrotor manipulation system that consists of a 2-link manipulator attached to the bottom of a quadrotor. This new system presents a solution for the drawbacks found in the current quadrotor manipulation system which uses a gripper fixed to a quadrotor. Unlike the current system, the proposed system enables the end-effector to achieve any arbitrary orientation and thus increases its degrees of freedom from 4 to 6. Also, it provides enough distance between the quadrotor and the object to be manipulated. This is useful in some applications such as demining applications. System kinematics and dynamics are derived which are highly nonlinear. Controller is designed based on feedback linearization to track desired trajectories. Controlling the movements in the horizontal directions is simplified by utilizing the derived nonholonmic constraints. Finally, the proposed system is simulated using MATLAB/SIMULINK program. The simulation results show the effectiveness of the proposed controller.",
"title": ""
},
{
"docid": "c0df91240263d17411c4d4bb311bc19a",
"text": "AIMS\nLarge randomized trials have shown that beta-blockers reduce mortality and hospital admissions in patients with heart failure. The effects of beta-blockers in elderly patients with a broad range of left ventricular ejection fraction are uncertain. The SENIORS study was performed to assess effects of the beta-blocker, nebivolol, in patients >/=70 years, regardless of ejection fraction.\n\n\nMETHODS AND RESULTS\nWe randomly assigned 2128 patients aged >/=70 years with a history of heart failure (hospital admission for heart failure within the previous year or known ejection fraction </=35%), 1067 to nebivolol (titrated from 1.25 mg once daily to 10 mg once daily), and 1061 to placebo. The primary outcome was a composite of all cause mortality or cardiovascular hospital admission (time to first event). Analysis was by intention to treat. Mean duration of follow-up was 21 months. Mean age was 76 years (SD 4.7), 37% were female, mean ejection fraction was 36% (with 35% having ejection fraction >35%), and 68% had a prior history of coronary heart disease. The mean maintenance dose of nebivolol was 7.7 mg and of placebo 8.5 mg. The primary outcome occurred in 332 patients (31.1%) on nebivolol compared with 375 (35.3%) on placebo [hazard ratio (HR) 0.86, 95% CI 0.74-0.99; P=0.039]. There was no significant influence of age, gender, or ejection fraction on the effect of nebivolol on the primary outcome. Death (all causes) occurred in 169 (15.8%) on nebivolol and 192 (18.1%) on placebo (HR 0.88, 95% CI 0.71-1.08; P=0.21).\n\n\nCONCLUSION\nNebivolol, a beta-blocker with vasodilating properties, is an effective and well-tolerated treatment for heart failure in the elderly.",
"title": ""
},
{
"docid": "9244acef01812d757639bd4f09631c22",
"text": "This paper describes the results of the first shared task on Multilingual Emoji Prediction, organized as part of SemEval 2018. Given the text of a tweet, the task consists of predicting the most likely emoji to be used along such tweet. Two subtasks were proposed, one for English and one for Spanish, and participants were allowed to submit a system run to one or both subtasks. In total, 49 teams participated in the English subtask and 22 teams submitted a system run to the Spanish subtask. Evaluation was carried out emoji-wise, and the final ranking was based on macro F-Score. Data and further information about this task can be found at https://competitions. codalab.org/competitions/17344.",
"title": ""
},
{
"docid": "cb9a54b8eeb6ca14bdbdf8ee3faa8bdb",
"text": "The problem of auto-focusing has been studied for long, but most techniques found in literature do not always work well for low-contrast images. In this paper, a robust focus measure based on the energy of the image is proposed. It performs equally well on ordinary and low-contrast images. In addition, it is computationally efficient.",
"title": ""
},
{
"docid": "fd51405c809d617663d1520921645529",
"text": "As the conversions between the sensing element and interface in the feedback loop and forward path are nonlinear, harmonic distortions appear in the output spectrum, which will decrease the signal-to-noise and distortion ratio. Nonlinear distortions are critical for a high-resolution electromechanical sigma-delta (ΣΔ) modulator. However, there exists no detailed analysis approach to derive harmonic distortion results in the output signal for electromechanical ΣΔ modulators. In this paper, we employ a nonlinear op-amp dc gain model to derive the nonlinear displacement to voltage conversion in the forward path, and the nonlinear electrostatic feedback force on the proof mass is also computed. Based on a linear approximation of the modulator in the back end, the harmonic distortion model in the output spectrum of the proposed fifth-order electromechanical ΣΔ modulator is derived as a function of system parameters. The proposed nonlinear distortion models are verified by simulation results and experimental results.",
"title": ""
},
{
"docid": "06f421d0f63b9dc08777c573840654d5",
"text": "This paper presents the implementation of a modified state observer-based adaptive dynamic inverse controller for the Black Kite micro aerial vehicle. The pitch and velocity adaptations are computed by the modified state observer in the presence of turbulence to simulate atmospheric conditions. This state observer uses the estimation error to generate the adaptations and, hence, is more robust than model reference adaptive controllers which use modeling or tracking error. In prior work, a traditional proportional-integral-derivative control law was tested in simulation for its adaptive capability in the longitudinal dynamics of the Black Kite micro aerial vehicle. This controller tracks the altitude and velocity commands during normal conditions, but fails in the presence of both parameter uncertainties and system failures. The modified state observer-based adaptations, along with the proportional-integral-derivative controller enables tracking despite these conditions. To simulate flight of the micro aerial vehicle with turbulence, a Dryden turbulence model is included. The turbulence levels used are based on the absolute load factor experienced by the aircraft. The length scale was set to 2.0 meters with a turbulence intensity of 5.0 m/s that generates a moderate turbulence. Simulation results for various flight conditions show that the modified state observer-based adaptations were able to adapt to the uncertainties and the controller tracks the commanded altitude and velocity. The summary of results for all of the simulated test cases and the response plots of various states for typical flight cases are presented.",
"title": ""
},
{
"docid": "8856fa1c0650970da31fae67cd8dcd86",
"text": "In this paper, a new topology for rectangular waveguide bandpass and low-pass filters is presented. A simple, accurate, and robust design technique for these novel meandered waveguide filters is provided. The proposed filters employ a concatenation of ±90° $E$ -plane mitered bends (±90° EMBs) with different heights and lengths, whose dimensions are consecutively and independently calculated. Each ±90° EMB satisfies a local target reflection coefficient along the device so that they can be calculated separately. The novel structures allow drastically reduce the total length of the filters and embed bends if desired, or even to provide routing capabilities. Furthermore, the new meandered topology allows the introduction of transmission zeros above the passband of the low-pass filter, which can be controlled by the free parameters of the ±90° EMBs. A bandpass and a low-pass filter with meandered topology have been designed following the proposed novel technique. Measurements of the manufactured prototypes are also included to validate the novel topology and design technique, achieving excellent agreement with the simulation results.",
"title": ""
},
{
"docid": "1dbb04e806b1fd2a8be99633807d9f4d",
"text": "Realistically animated fluids can add substantial realism to interactive applications such as virtual surgery simulators or computer games. In this paper we propose an interactive method based on Smoothed Particle Hydrodynamics (SPH) to simulate fluids with free surfaces. The method is an extension of the SPH-based technique by Desbrun to animate highly deformable bodies. We gear the method towards fluid simulation by deriving the force density fields directly from the Navier-Stokes equation and by adding a term to model surface tension effects. In contrast to Eulerian grid-based approaches, the particle-based approach makes mass conservation equations and convection terms dispensable which reduces the complexity of the simulation. In addition, the particles can directly be used to render the surface of the fluid. We propose methods to track and visualize the free surface using point splatting and marching cubes-based surface reconstruction. Our animation method is fast enough to be used in interactive systems and to allow for user interaction with models consisting of up to 5000 particles.",
"title": ""
},
{
"docid": "a68872f1835e1c477d04335ccce99862",
"text": "An industrial robot today uses measurements of its joint positions and models of its kinematics and dynamics to estimate and control its end-effector position. Substantially better end-effector position estimation and control performance would be obtainable if direct measurements of its end-effector position were also used. The subject of this paper is extended Kalman filtering for precise estimation of the position of the end-effector of a robot using, in addition to the usual measurements of the joint positions, direct measurements of the end-effector position. The estimation performances of extended Kalman filters are compared in applications to a planar two-axis robotic arm with very flexible links. The comparisons shed new light on the dependence of extended Kalman filter estimation performance on the quality of the model of the arm dynamics that the extended Kalman filter operates with. KEY WORDS—extended Kalman filter, estimation, flexible links, robot",
"title": ""
},
{
"docid": "515fac2b02637ddee5e69a8a22d0e309",
"text": "The continuous expansion of the multilingual information society has led in recent years to a pressing demand for multilingual linguistic resources suitable to be used for different applications. In this paper we present the WordNet Domains Hierarchy (WDH), a language-independent resource composed of 164, hierarchically organized, domain labels (e.g. Architecture, Sport, Medicine). Although WDH has been successfully applied to various Natural Language Processing tasks, the first available version presented some problems, mostly related to the lack of a clear semantics of the domain labels. Other correlated issues were the coverage and the balancing of the domains. We illustrate a new version of WDH addressing these problems by an explicit and systematic reference to the Dewey Decimal Classification. The new version of WDH has a better defined semantics and is applicable to a wider range of tasks.",
"title": ""
},
{
"docid": "976f97f5b64080cf48da206fef3acb27",
"text": "One of the primary architectural principles behind the Internet is the use of distributed protocols, which facilitates fault tolerance and distributed management. Unfortunately, having nodes (i.e., switches and routers) perform control decisions independently makes it difficult to control the network or even understand or debug its overall emergent behavior. As a result, networks are often inefficient, unstable, and fragile. This Internet architecture also poses a significant, often insurmountable, challenge to the deployment of new protocols and evolution of existing ones. Software defined networking (SDN) is a recent networking architecture with promising properties relative to these weaknesses in traditional networks. SDN decouples the control plane, which makes the network forwarding decisions, from the data plane, which mainly forwards the data. This decoupling enables more centralized control where coordinated decisions directly guide the network to desired operating conditions. Moreover, decoupling the control enables graceful evolution of protocols, and the deployment of new protocols without having to replace the data plane switches. In this survey, we review recent work that leverages SDN in wireless network settings, where they are not currently widely adopted or well understood. More specifically, we evaluate the use of SDN in four classes of popular wireless networks: cellular, sensor, mesh, and home networks. We classify the different advantages that can be obtained by using SDN across this range of networks, and hope that this classification identifies unexplored opportunities for using SDN to improve the operation and performance of wireless networks.",
"title": ""
},
{
"docid": "6c2ac0d096c1bcaac7fd70bd36a5c056",
"text": "The purpose of this review is to illustrate the ways in which molecular neurobiological investigations will contribute to an improved understanding of drug addiction and, ultimately, to the development of more effective treatments. Such molecular studies of drug addiction are needed to establish two general types of information: (1) mechanisms of pathophysiology, identification of the changes that drugs of abuse produce in the brain that lead to addiction; and (2) mechanisms of individual risk, identification of specific genetic and environmental factors that increase or decrease an individual's vulnerability for addiction. This information will one day lead to fundamentally new approaches to the treatment and prevention of addictive disorders.",
"title": ""
},
{
"docid": "49b6bfaa3f681329522b5d8dd1277e97",
"text": "Pipeline-based applications have become an integral part of life. However, knowing that the pipeline systems can be largely deployed in an inaccessible and hazardous environment, active monitoring and frequent inspection of the pipeline systems are highly expensive using the traditional maintenance systems. Robot agents have been considered as an attractive alternative. Although many different types of pipeline exploration robots have been proposed, they were suffered from various limitations. In this paper, we present the design and implementation of a single-moduled fully autonomous mobile pipeline exploration robot, called FAMPER, that can be used for the inspection of 150mm pipelines. This robot consists of four wall-press caterpillars operated by two DC motors each. The speed of each caterpillar is controlled independently to provide steering capability to go through 45 degree elbows, 90 degree elbows, T-branches, and Y-branches. The uniqueness of this paper is to show the opportunity of using 4 caterpillar configuration for superior performance in all types of complex networks of pipelines. The robot system has been developed and experimented in different pipeline layouts.",
"title": ""
},
{
"docid": "318a4af201ed3563443dcbe89c90b6b4",
"text": "Clouds are distributed Internet-based platforms that provide highly resilient and scalable environments to be used by enterprises in a multitude of ways. Cloud computing offers enterprises technology innovation that business leaders and IT infrastructure managers can choose to apply based on how and to what extent it helps them fulfil their business requirements. It is crucial that all technical consultants have a rigorous understanding of the ramifications of cloud computing as its influence is likely to spread the complete IT landscape. Security is one of the major concerns that is of practical interest to decision makers when they are making critical strategic operational decisions. Distributed Denial of Service (DDoS) attacks are becoming more frequent and effective over the past few years, since the widely publicised DDoS attacks on the financial services industry that came to light in September and October 2012 and resurfaced in the past two years. In this paper, we introduce advanced cloud security technologies and practices as a series of concepts and technology architectures, from an industry-centric point of view. This is followed by classification of intrusion detection and prevention mechanisms that can be part of an overall strategy to help understand identify and mitigate potential DDoS attacks on business networks. The paper establishes solid coverage of security issues related to DDoS and virtualisation with a focus on structure, clarity, and well-defined blocks for mainstream cloud computing security solutions and platforms. In doing so, we aim to provide industry technologists, who may not be necessarily cloud or security experts, with an effective tool to help them understand the security implications associated with cloud adoption in their transition towards more knowledge-based systems. Keywords—Cloud Computing Security; Distributed Denial of Service; Intrusion Detection; Intrusion Prevention; Virtualisation",
"title": ""
},
{
"docid": "1b2fcf85bc73f3249d8685e0063aaa3a",
"text": "In our present society, the cinema has become one of the major forms of entertainment providing unlimited contexts of emotion elicitation for the emotional needs of human beings. Since emotions are universal and shape all aspects of our interpersonal and intellectual experience, they have proved to be a highly multidisciplinary research field, ranging from psychology, sociology, neuroscience, etc., to computer science. However, affective multimedia content analysis work from the computer science community benefits but little from the progress achieved in other research fields. In this paper, a multidisciplinary state-of-the-art for affective movie content analysis is given, in order to promote and encourage exchanges between researchers from a very wide range of fields. In contrast to other state-of-the-art papers on affective video content analysis, this work confronts the ideas and models of psychology, sociology, neuroscience, and computer science. The concepts of aesthetic emotions and emotion induction, as well as the different representations of emotions are introduced, based on psychological and sociological theories. Previous global and continuous affective video content analysis work, including video emotion recognition and violence detection, are also presented in order to point out the limitations of affective video content analysis work.",
"title": ""
},
{
"docid": "cfffdd632f03ce28c748fb11ced8dc67",
"text": "Multiple-Phased Systems (MPS), i.e., systems whose operational life can be partitioned in a set of disjoint periods, called \"phases\", include several classes of systems such as Phased Mission Systems and Scheduled Maintenance Systems. Because of their deployment in critical applications, the dependability modeling and analysis of Multiple-Phased Systems is a task of primary relevance. The phased behavior makes the analysis of Multiple-Phased Systems extremely complex. This paper describes the modeling methodology and the solution procedure implemented in DEEM, a dependability modeling and evaluation tool specifically tailored for Multiple Phased Systems. It also describes its use for the solution of representative MPS problems. DEEM relies upon Deterministic and Stochastic Petri Nets as the modeling formalism, and on Markov Regenerative Processes for the model solution. When compared to existing general-purpose tools based on similar formalisms, DEEM offers advantages on both the modeling side (sub-models neatly model the phase-dependent behaviors of MPS), and on the evaluation side (a specialized algorithm allows a considerable reduction of the solution cost and time). Thus, DEEM is able to deal with all the scenarios of MPS which have been analytically treated in the literature, at a cost which is comparable with that of the cheapest ones, completely solving the issues posed by the phased-behavior of MPS.",
"title": ""
}
] |
scidocsrr
|
0d48cb345837cad93ef7c25df3d87c9c
|
S-NFV: Securing NFV states by using SGX
|
[
{
"docid": "00a3504c21cf0a971a717ce676d76933",
"text": "In recent years, researchers have proposed systems for running trusted code on an untrusted operating system. Protection mechanisms deployed by such systems keep a malicious kernel from directly manipulating a trusted application's state. Under such systems, the application and kernel are, conceptually, peers, and the system call API defines an RPC interface between them.\n We introduce Iago attacks, attacks that a malicious kernel can mount in this model. We show how a carefully chosen sequence of integer return values to Linux system calls can lead a supposedly protected process to act against its interests, and even to undertake arbitrary computation at the malicious kernel's behest.\n Iago attacks are evidence that protecting applications from malicious kernels is more difficult than previously realized.",
"title": ""
},
{
"docid": "05fae4c840b1ee242a16a9db5eee4fb5",
"text": "Hardware technologies for trusted computing, or trusted execution environments (TEEs), have rapidly matured over the last decade. In fact, TEEs are at the brink of widespread commoditization with the recent introduction of Intel Software Guard Extensions (Intel SGX). Despite such rapid development of TEE, software technologies for TEE significantly lag behind their hardware counterpart, and currently only a select group of researchers have the privilege of accessing this technology. To address this problem, we develop an open source platform, called OpenSGX, that emulates Intel SGX hardware components at the instruction level and provides new system software components necessarily required for full TEE exploration. We expect that the OpenSGX framework can serve as an open platform for SGX research, with the following contributions. First, we develop a fully functional, instruction-compatible emulator of Intel SGX for enabling the exploration of software/hardware design space, and development of enclave programs. OpenSGX provides a platform for SGX development, meaning that it provides not just emulation but also operating system components, an enclave program loader/packager, an OpenSGX user library, debugging, and performance monitoring. Second, to show OpenSGX’s use cases, we applied OpenSGX to protect sensitive information (e.g., directory) of Tor nodes and evaluated their potential performance impacts. Therefore, we believe OpenSGX has great potential for broader communities to spark new research on soon-to-becommodity Intel SGX.",
"title": ""
}
] |
[
{
"docid": "c1f6052ecf802f1b4b2e9fd515d7ea15",
"text": "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.",
"title": ""
},
{
"docid": "6849618ccfe62e5474cd9d6735ac08ec",
"text": "The Dynamic Current or Dyna-C is a minimal topology for implementing the bidirectional three-phase solid-state transformer (SST). While only two current-source power conversion stages are employed, the Dyna-C SST has features of voltage step up/down, arbitrary power factors, and frequencies between the input and output terminals. In this paper, a compact 50-kVA three-phase SST based on this minimal topology is designed. More specifically, design considerations and practical implementation techniques are presented. Results from experimental measurements are shown and discussed.",
"title": ""
},
{
"docid": "f041a02b565ca9100d20b479fb6951c8",
"text": "Linear blending is a very popular skinning technique for virtual characters, even though it does not always generate realistic deformations. Recently, nonlinear blending techniques (such as dual quaternions) have been proposed in order to improve upon the deformation quality of linear skinning. The trade-off consists of the increased vertex deformation time and the necessity to redesign parts of the 3D engine. In this paper, we demonstrate that any nonlinear skinning technique can be approximated to an arbitrary degree of accuracy by linear skinning, using just a few samples of the nonlinear blending function (virtual bones). We propose an algorithm to compute this linear approximation in an automatic fashion, requiring little or no interaction with the user. This enables us to retain linear skinning at the core of our 3D engine without compromising the visual quality or character setup costs.",
"title": ""
},
{
"docid": "4b496b32df5d8697eb31d96878a1edcb",
"text": "Intelligent Speech Analysis (ISA) plays an essential role in smart conversational agent systems that aim to enable natural, intuitive, and friendly human computer interaction. It includes not only the long-term developed Automatic Speech Recognition (ASR), but also the young field of Computational Paralinguistics, which has attracted increasing attention in recent years. In real-world applications, however, several challenging issues surrounding data quantity and quality arise. For example, predefined databases for most paralinguistic tasks are normally quite small and few in number, which are insufficient for building a robust model. A distributed structure could be useful for data collection, but original feature sets are always too large to meet the physical transmission requirements, for example, bandwidth limitation. Furthermore, in a hands-free application scenario, reverberation severely distorts speech signals, which results in performance degradation of recognisers. To address these issues, this thesis proposes and analyses semi-autonomous data enrichment and optimisation approaches. More precisely, for the representative paralinguistic task of speech emotion recognition, both labelled and unlabelled data from heterogeneous resources are exploited by methods of data pooling, data selection, confidence-based semi-supervised learning, active learning, as well as cooperative learning. As a result, the manual work for data annotation is greatly reduced. With the advance of networks and information technologies, this thesis extends the traditional ISA system into a modern distributed paradigm, in which Split Vector Quantisation is employed for feature compression. Moreover, for distant-talk ASR, Long Short-Term Memory (LSTM) recurrent neural networks, which are known to be well-suited to context-sensitive pattern recognition, are evaluated to mitigate reverberation. The experimental results demonstrate that the proposed LSTM-based feature enhancement frameworks prevail over the current state-of-the-art methods.",
"title": ""
},
{
"docid": "a1b7f477c339f30587a2f767327b4b41",
"text": "Software game is a kind of application that is used not only for entertainment, but also for serious purposes that can be applicable to different domains such as education, business, and health care. Multidisciplinary nature of the game development processes that combine sound, art, control systems, artificial intelligence (AI), and human factors, makes the software game development practice different from traditional software development. However, the underline software engineering techniques help game development to achieve maintainability, flexibility, lower effort and cost, and better design. The purpose of this study is to assesses the state of the art research on the game development software engineering process and highlight areas that need further consideration by researchers. In the study, we used a systematic literature review methodology based on well-known digital libraries. The largest number of studies have been reported in the production phase of the game development software engineering process life cycle, followed by the pre-production phase. By contrast, the post-production phase has received much less research activity than the pre-production and production phases. The results of this study suggest that the game development software engineering process has many aspects that need further attention from researchers; that especially includes the postproduction phase.",
"title": ""
},
{
"docid": "9032262f04ca1d7974f54bf1278409cc",
"text": "Viewing new product development as a problem-solving activity, the focus in the article is on abduction as a problem-solving strategy. We identify the advantages of problem-solving by way of abduction, as compared to induction and deduction, present a rationale of how to decide between the different problem-solving strategies, and finally, draw on a case study of a main European car manufacturer to analyze how problem-solving by abduction is implemented in practice.",
"title": ""
},
{
"docid": "4f511a669a510153aa233d90da4e406a",
"text": "In many visual surveillance applications the task of person detection and localization can be solved easier by using thermal long-wave infrared (LWIR) cameras which are less affected by changing illumination or background texture than visual-optical cameras. Especially in outdoor scenes where usually only few hot spots appear in thermal infrared imagery, humans can be detected more reliably due to their prominent infrared signature. We propose a two-stage person recognition approach for LWIR images: (1) the application of Maximally Stable Extremal Regions (MSER) to detect hot spots instead of background subtraction or sliding window and (2) the verification of the detected hot spots using a Discrete Cosine Transform (DCT) based descriptor and a modified Random Naïve Bayes (RNB) classifier. The main contributions are the novel modified RNB classifier and the generality of our method. We achieve high detection rates for several different LWIR datasets with low resolution videos in real-time. While many papers in this topic are dealing with strong constraints such as considering only one dataset, assuming a stationary camera, or detecting only moving persons, we aim at avoiding such constraints to make our approach applicable with moving platforms such as Unmanned Ground Vehicles (UGV).",
"title": ""
},
{
"docid": "529edb7ca367261731a154c24512d288",
"text": "OBJECTIVE\nA depressive disorder is an illness that involves the body, mood, thoughts and behaviors. This study was performed to identify the presence of depression among medical students of Urmia University of Medical Sciences.\n\n\nMETHODS\nA descriptive cross-sectional study was conducted on 700 undergraduate medical and basic sciences students. Beck depression inventory (BDI) used for data gathering.\n\n\nRESULTS\nMean score of BDI was 10.4 ± 0.8 and 52.6% of students scored under the depression threshold. Four of them had severe depression. RESULTS showed no significant relationship between depression and age, education, sex, rank of birth or duration of education.\n\n\nCONCLUSION\nPrevalence of depression that can affect the students' quality of education and social behavior was high in Urmia University of Medical Sciences.",
"title": ""
},
{
"docid": "1162b3b710c643aba015c751ac5b8107",
"text": "Machine learning is now being used to make crucial decisions about people’s lives. For nearly all of these decisions there is a risk that individuals of a certain race, gender, sexual orientation, or any other subpopulation are unfairly discriminated against. Our recent method has demonstrated how to use techniques from counterfactual inference to make predictions fair across different subpopulations. This method requires that one provides the causal model that generated the data at hand. In general, validating all causal implications of the model is not possible without further assumptions. Hence, it is desirable to integrate competing causal models to provide counterfactually fair decisions, regardless of which causal “world” is the correct one. In this paper, we show how it is possible to make predictions that are approximately fair with respect to multiple possible causal models at once, thus mitigating the problem of exact causal specification. We frame the goal of learning a fair classifier as an optimization problem with fairness constraints entailed by competing causal explanations. We show how this optimization problem can be efficiently solved using gradient-based methods. We demonstrate the flexibility of our model on two real-world fair classification problems. We show that our model can seamlessly balance fairness in multiple worlds with prediction accuracy.",
"title": ""
},
{
"docid": "c5e648beeed1f968388663f9a8c1b494",
"text": "OBJECTIVE\nTo determine those objective measurements that characterize the differences between the external genital organs of pre- and postmenopausal women.\n\n\nMETHODS\nDuring the study period, 50 premenopausal and 50 postmenopausal patients were recruited. Only women who were admitted for routine control examinations were consecutively included in the study. Exclusion criteria were previous history of pelvic surgery including external and internal genital organs, presence of diseases that may change the anatomy of external genital organs, Mullerian anomalies, previous vaginal birth with mediolateral episiotomy, and use of hormone replacement therapy. The following measurements were performed: length and width of clitoris, labium majus, and labium minus, the distance between the clitoris and urethra, perineal length, and length of vagina.\n\n\nRESULTS\nThe length of the vagina and the width of the labium minus were significantly different between the two groups. Mean vaginal length was significantly longer in premenopausal women compared to postmenopausal women (90.3 +/- 14.8 mm vs. 82.3 +/- 11.2 mm, respectively). The labia minora were wider in premenopausal women than in postmenopausal women (17.9 +/- 4.1 mm vs. 15.4 +/- 4.7 mm).\n\n\nCONCLUSIONS\nCharacterization of the anatomical changes and relationships of external genitalia in postmenopausal women is important for functional and perioperative evaluation. In addition to reconstructive surgical procedures, determination of the objective measurements of anatomical landmarks in postmenopausal external genitalia might also be useful for assessing the results of treatment of 'atrophic' changes in women.",
"title": ""
},
{
"docid": "f33b73bf41e5253fb4b043a117fcd9e2",
"text": "Traditional information systems return answers after a user submits a complete query. Users often feel \"left in the dark\" when they have limited knowledge about the underlying data, and have to use a try-and-see approach for finding information. A recent trend of supporting autocomplete in these systems is a first step towards solving this problem. In this paper, we study a new information-access paradigm, called \"interactive, fuzzy search,\" in which the system searches the underlying data \"on the fly\" as the user types in query keywords. It extends autocomplete interfaces by (1) allowing keywords to appear in multiple attributes (in an arbitrary order) of the underlying data; and (2) finding relevant records that have keywords matching query keywords approximately. This framework allows users to explore data as they type, even in the presence of minor errors. We study research challenges in this framework for large amounts of data. Since each keystroke of the user could invoke a query on the backend, we need efficient algorithms to process each query within milliseconds. We develop various incremental-search algorithms using previously computed and cached results in order to achieve an interactive speed. We have deployed several real prototypes using these techniques. One of them has been deployed to support interactive search on the UC Irvine people directory, which has been used regularly and well received by users due to its friendly interface and high efficiency.",
"title": ""
},
{
"docid": "ce1e222bae70cdc4ac22189e4fd9c69f",
"text": "In the era of big data, the amount of data that individuals and enterprises hold is increasing, and the efficiency and effectiveness of data analysis are increasingly demanding. Collaborative deep learning, as a machine learning framework that can share users' data and improve learning efficiency, has drawn more and more attention and started to be applied in practical problems. In collaborative deep learning, data sharing and interaction among multi users may lead data leakage especially when data are very sensitive to the user. Therefore, how to protect the data privacy when processing collaborative deep learning becomes an important problem. In this paper, we review the current state of art researches in this field and summarize the application of privacy-preserving technologies in two phases of collaborative deep learning. Finally we discuss the future direction and trend on this problem.",
"title": ""
},
{
"docid": "22d5dd06ca164aa0b012b0764d7c4440",
"text": "As multicore architectures enter the mainstream, there is a pressing demand for high-level programming models that can effectively map to them. Stream programming offers an attractive way to expose coarse-grained parallelism, as streaming applications (image, video, DSP, etc.) are naturally represented by independent filters that communicate over explicit data channels.In this paper, we demonstrate an end-to-end stream compiler that attains robust multicore performance in the face of varying application characteristics. As benchmarks exhibit different amounts of task, data, and pipeline parallelism, we exploit all types of parallelism in a unified manner in order to achieve this generality. Our compiler, which maps from the StreamIt language to the 16-core Raw architecture, attains a 11.2x mean speedup over a single-core baseline, and a 1.84x speedup over our previous work.",
"title": ""
},
{
"docid": "83f88cbaed86220e0047b51c965a77ba",
"text": "There are two conflicting perspectives regarding the relationship between profanity and dishonesty. These two forms of norm-violating behavior share common causes and are often considered to be positively related. On the other hand, however, profanity is often used to express one's genuine feelings and could therefore be negatively related to dishonesty. In three studies, we explored the relationship between profanity and honesty. We examined profanity and honesty first with profanity behavior and lying on a scale in the lab (Study 1; N = 276), then with a linguistic analysis of real-life social interactions on Facebook (Study 2; N = 73,789), and finally with profanity and integrity indexes for the aggregate level of U.S. states (Study 3; N = 50 states). We found a consistent positive relationship between profanity and honesty; profanity was associated with less lying and deception at the individual level and with higher integrity at the society level.",
"title": ""
},
{
"docid": "2c1890f9593b77e9a6fc8cf78e9d0130",
"text": "In our study, we tested the hypothesis whether valproic acid (VPA) in therapeutic concentrations has potential to affect expression of CYP3A4 and MDR1 via constitutive androstane receptor (CAR) and pregnane X receptor (PXR) pathways. Interaction of VPA with CAR and PXR nuclear receptors was studied using luciferase reporter assays, real-time reverse transcriptase polymerase chain reaction (RT-PCR), electrophoretic mobility shift assay (EMSA), and analysis of CYP3A4 catalytic activity. Using transient transfection reporter assays in HepG2 cells, VPA was recognized to activate CYP3A4 promoter via CAR and PXR pathways. By contrast, a significant effect of VPA on MDR1 promoter activation was observed only in CAR-cotransfected HepG2 cells. These data well correlated with up-regulation of CYP3A4 and MDR1 mRNAs analyzed by real-time RT-PCR in cells transfected with expression vectors encoding CAR or PXR and treated with VPA. In addition, VPA significantly up-regulated CYP3A4 mRNA in primary hepatocytes and augmented the effect of rifampicin. EMSA experiments showed VPA-mediated augmentation of CAR/retinoid X receptor alpha heterodimer binding to direct repeat 3 (DR3) and DR4 responsive elements of CYP3A4 and MDR1 genes, respectively. Finally, analysis of specific CYP3A4 catalytic activity revealed its significant increase in VPA-treated LS174T cells transfected with PXR. In conclusion, we provide novel insight into the mechanism by which VPA affects gene expression of CYP3A4 and MDR1 genes. Our results demonstrate that VPA has potential to up-regulate CYP3A4 and MDR1 through direct activation of CAR and/or PXR pathways. Furthermore, we suggest that VPA synergistically augments the effect of rifampicin in transactivation of CYP3A4 in primary human hepatocytes.",
"title": ""
},
{
"docid": "ffc09744f2668e52ce84ac28887fd5fe",
"text": "As the number of research papers available on the Web has increased enormously over the years, paper recommender systems have been proposed to help researchers on automatically finding works of interest. The main problem with the current approaches is that they assume that recommending algorithms are provided with a rich set of evidence (e.g., document collections, citations, profiles) which is normally not widely available. In this paper we propose a novel source independent framework for research paper recommendation. The framework requires as input only a single research paper and generates several potential queries by using terms in that paper, which are then submitted to existing Web information sources that hold research papers. Once a set of candidate papers for recommendation is generated, the framework applies content-based recommending algorithms to rank the candidates in order to recommend the ones most related to the input paper. This is done by using only publicly available metadata (i.e., title and abstract). We evaluate our proposed framework by performing an extensive experimentation in which we analyzed several strategies for query generation and several ranking strategies for paper recommendation. Our results show that good recommendations can be obtained with simple and low cost strategies.",
"title": ""
},
{
"docid": "ff6ab778ec692f4b8e86da6f573d7d0b",
"text": "Despite the enormous popularity of Online Social Networking sites (OSNs; e.g., Facebook and Myspace), little research in psychology has been done on them. Two studies examining how personality is reflected in OSNs revealed several connections between the Big Five personality traits and self-reported Facebook-related behaviors and observable profile information. For example, extraversion predicted not only frequency of Facebook usage (Study 1), but also engagement in the site, with extraverts (vs. introverts) showing traces of higher levels of Facebook activity (Study 2). As in offline contexts, extraverts seek out virtual social engagement, which leaves behind a behavioral residue in the form of friends lists and picture postings. Results suggest that, rather than escaping from or compensating for their offline personality, OSN users appear to extend their offline personalities into the domains of OSNs.",
"title": ""
},
{
"docid": "ae687136682fd78e9a92797c2c24ddb0",
"text": "Not all global health issues are truly global, but the neglected epidemic of stillbirths is one such urgent concern. The Lancet’s fi rst Series on stillbirths was published in 2011. Thanks to tenacious eff orts by the authors of that Series, led by Joy Lawn, together with the impetus of a wider maternal and child health community, stillbirths have been recognised as an essential part of the post-2015 sustainable development agenda, expressed through a new Global Strategy for Women’s, Children’s and Adolescents’ Health which was launched at the UN General Assembly in 2015. But recognising is not the same as doing. We now present a second Series on stillbirths, which is predicated on the idea of ending preventable stillbirth deaths by 2030. As this Series amply proves, such an ambitious goal is possible. The fi ve Series papers off er a roadmap for eliminating one of the most neglected tragedies in global health today. Perhaps the greatest obstacle to addressing stillbirths is stigma. The utter despair and hopelessness felt by families who suff er a stillbirth is often turned inwards to fuel feelings of shame and failure. The idea of demanding action would be anathema for many women and men who have experienced the loss of a child in this appalling way. This Series dispels any notion that such self-recrimination is justifi ed. Most stillbirths have preventable causes—maternal infections, chronic diseases, undernutrition, obesity, to name only a few. The solutions to ending preventable stillbirths are therefore practicable, feasible, and cost eff ective. They form a core part of the continuum of care—from prenatal care and antenatal care, through skilled birth attendance, to newborn care. The number of stillbirths remains alarmingly high: 2·6 million stillbirths annually, with little reduction this past decade. But the truly horrifi c fi gure is 1·3 million intrapartum stillbirths. The idea of a child being alive at the beginning of labour and dying for entirely preventable reasons during the next few hours should be a health scandal of international proportions. Yet it is not. Our Series aims to make it so. When a stillbirth does occur, the health system can fail parents further by the absence of respectful, empathetic services, including bereavement care. Yet provision of such care is not only humane and necessary, it can also mitigate a range of negative emotional and psychological symptoms that mothers and fathers experience after the death of their baby, some of which can persist long after their loss. Ten nations account for two-thirds of stillbirths: India, Nigeria, Pakistan, China, Ethiopia, Democratic Republic of the Congo, Bangladesh, Indonesia, Tanzania, and Niger. Although 98% of stillbirths take place in low-income and middle-income countries, stillbirth rates also remain unacceptably high in high-income settings. Why? Partly because stillbirths are strongly linked to adverse social and economic determinants of health. The health system alone cannot address entirely the predicament of stillbirths. Only by tackling the causes of the causes of stillbirths will rates be defl ected downwards in high-income settings. There is one action we believe off ers promising prospects for accelerating progress to end stillbirths—stronger independent accountability both within countries and globally. 
By accountability, we mean better monitoring (with investment in high-quality data collection), stronger review (including, especially, civil society organisations), and more robust action (high-level political leadership, and not merely from a Ministry of Health). The UN’s new Independent Accountability Panel has an important part to play in this process. But the really urgent need is for stronger independent accountability in countries. And here is where a virtuous alliance might lie between health professionals, clinical and public health scientists, and civil society, including bereaved parents. We believe this Series off ers the spark to ignite a new alliance of common interests to end preventable stillbirths by 2030.",
"title": ""
},
{
"docid": "9db418e478634d17786d7844c0878475",
"text": "The voltage controlled oscillator (VCO) is a critical sub-block in communications transceivers. The role of the VCO in a transceiver and the VCO requirements are first reviewed. The necessity of GHz VCOs and the driving factors towards the monolithic integration of the VCO are examined. VCO design techniques are outlined and design trade-offs are explored. The performance of VCOs in different implementation styles is compared to evaluate when and if VCO integration is desirable.",
"title": ""
},
{
"docid": "be9d62529c7d91941812392bc545eec2",
"text": "In this paper, we focus on a new problem: applying artificial intelligence to automatically generate fashion style images. Given a basic clothing image and a fashion style image (e.g., leopard print), we generate a clothing image with the certain style in real time with a neural fashion style generator. Fashion style generation is related to recent artistic style transfer works, but has its own challenges. The synthetic image should preserve the similar design as the basic clothing, and meanwhile blend the new style pattern on the clothing. Neither existing global nor patch based neural style transfer methods could well solve these challenges. In this paper, we propose an end-to-end feed-forward neural network which consists of a fashion style generator and a discriminator. The global and patch based style and content losses calculated by the discriminator alternatively back-propagate the generator network and optimize it. The global optimization stage preserves the clothing form and design and the local optimization stage preserves the detailed style pattern. Extensive experiments show that our method outperforms the state-of-the-arts.",
"title": ""
}
] |
scidocsrr
|
f58003a4687770495cd4af6a2447409f
|
Large-Vocabulary Speech Recognition Algorithms
|
[
{
"docid": "1738a8ccb1860e5b85e2364f437d4058",
"text": "We describe a new algorithm for finding the hypothesis in a recognition lattice that is expected to minimize the word er ror rate (WER). Our approach thus overcomes the mismatch between the word-based performance metric and the standard MAP scoring paradigm that is sentence-based, and that can le ad to sub-optimal recognition results. To this end we first find a complete alignment of all words in the recognition lattice, identifying mutually supporting and competing word hypotheses . Finally, a new sentence hypothesis is formed by concatenating the words with maximal posterior probabilities. Experimental ly, this approach leads to a significant WER reduction in a large vocab ulary recognition task.",
"title": ""
}
] |
[
{
"docid": "8a3dba8aa5aa8cf69da21079f7e36de6",
"text": "This letter presents a novel technique for synthesis of coupled-resonator filters with inter-resonator couplings varying linearly with frequency. The values of non-zero elements of the coupling matrix are found by solving a nonlinear least squares problem involving eigenvalues of matrix pencils derived from the coupling matrix and reference zeros and poles of scattering parameters. The proposed method was verified by numerical tests carried out for various coupling schemes including triplets and quadruplets for which the frequency-dependent coupling was found to produce an extra zero.",
"title": ""
},
{
"docid": "4dbead8a5316bc51e357867db4731561",
"text": "Fingerprint systems have received a great deal of research and attracted many researchers’ effort since they provide a powerful tool for access control and security and for practical applications. A literature review of the techniques used to extract the features of fingerprint as well as recognition techniques is given in this paper. Some of the reviewed research articles have used traditional methods such as recognition techniques, whereas the other articles have used neural networks methods. In addition, fingerprint techniques of enhancement are introduced.",
"title": ""
},
{
"docid": "87e56672751a8eb4d5a08f0459e525ca",
"text": "— The Internet of Things (IoT) has transformed many aspects of modern manufacturing, from design to production to quality control. In particular, IoT and digital manufacturing technologies have substantially accelerated product development cycles and manufacturers can now create products of a complexity and precision not heretofore possible. New threats to supply chain security have arisen from connecting machines to the Internet and introducing complex IoT-based systems controlling manufacturing processes. By attacking these IoT-based manufacturing systems and tampering with digital files, attackers can manipulate physical characteristics of parts and change the dimensions, shapes, or mechanical properties of the parts, which can result in parts that fail in the field. These defects increase manufacturing costs and allow silent problems to occur only under certain loads that can threaten safety and/or lives. To understand potential dangers and protect manufacturing system safety, this paper presents two taxonomies: one for classifying cyber-physical attacks against manufacturing processes and another for quality control measures for counteracting these attacks. We systematically identify and classify possible cyber-physical attacks and connect the attacks with variations in manufacturing processes and quality control measures. Our tax-onomies also provide a scheme for linking emerging IoT-based manufacturing system vulnerabilities to possible attacks and quality control measures.",
"title": ""
},
{
"docid": "fbcab4ec5e941858efe7e72db910de67",
"text": "Previously published guidelines provide comprehensive recommendations for hand hygiene in healthcare facilities. The intent of this document is to highlight practical recommendations in a concise format, update recommendations with the most current scientific evidence, and elucidate topics that warrant clarification or more robust research. Additionally, this document is designed to assist healthcare facilities in implementing hand hygiene adherence improvement programs, including efforts to optimize hand hygiene product use, monitor and report back hand hygiene adherence data, and promote behavior change. This expert guidance document is sponsored by the Society for Healthcare Epidemiology of America (SHEA) and is the product of a collaborative effort led by SHEA, the Infectious Diseases Society of America (IDSA), the American Hospital Association (AHA), the Association for Professionals in Infection Control and Epidemiology (APIC), and The Joint Commission, with major contributions from representatives of a number of organizations and societies with content expertise. The list of endorsing and supporting organizations is presented in the introduction to the 2014 updates.",
"title": ""
},
{
"docid": "bfbca1007aff8f95e843e5530a833fb9",
"text": "Airborne wind energy systems aim to generate renewable energy by means of the aerodynamic lift produced using a wing tethered to the ground and controlled to fly crosswind paths. The problem of maximizing the average power developed by the generator, in the presence of limited information on wind speed and direction, is considered. At constant tether speed operation, the power is related to the traction force generated by the wing. First, a study of the traction force is presented for a general path parametrization. In particular, the sensitivity of the traction force on the path parameters is analyzed. Then, the results of this analysis are exploited to design an algorithm to maximize the force, hence the power, in real-time. The algorithm uses only the measured traction force on the tether and the wing's position, and it is able to adapt the system's operation to maximize the average force with uncertain and time-varying wind. The influence of inaccurate sensor readings and turbulent wind are also discussed. The presented algorithm is not dependent on a specific hardware setup and can act as an extension of existing control structures. Both numerical simulations and experimental results are presented to highlight the effectiveness of the approach.",
"title": ""
},
{
"docid": "d513e7f66de64e90b93dcf02ae2ccfb3",
"text": "The first aim of this investigation was to assemble a group of photographs of 30 male and 30 female faces representing a standardized spectrum of facial attractiveness, against which orthognathic treatment outcomes could be compared. The second aim was to investigate the influence of the relationship between ANB differences and anterior lower face height (ALFH) percentages on facial attractiveness. The initial sample comprised standardized photographs of 41 female and 35 male Caucasian subjects. From these, the photographs of two groups of 30 male and 30 female subjects were compiled. A panel of six clinicians and six non-clinicians ranked the photographs. The results showed there to be a good level of reliability for each assessor when ranking the photographs on two occasions, particularly for the clinicians (female subjects r = 0.76-0.97, male subjects r = 0.72-0.94). Agreement among individuals within each group was also high, particularly when ranking facial attractiveness in male subjects (female subjects r = 0.57-0.84, male subjects r = 0.91-0.94). Antero-posterior (AP) discrepancies, as measured by soft tissue ANB, showed minimal correlation with facial attractiveness. However, a trend emerged that would suggest that in faces where the ANB varies widely from 5 degrees, the face is considered less attractive. The ALFH percentage also showed minimal correlation with facial attractiveness. However, there was a trend that suggested that greater ALFH percentages are considered less attractive in female faces, while in males the opposite trend was seen. Either of the two series of ranked photographs as judged by clinicians and non-clinicians could be used as a standard against which facial attractiveness could be assessed, as both were in total agreement about the most attractive faces. However, to judge the outcome of orthognathic treatment, the series of ranked photographs produced by the non-clinician group should be used as the 'standard' to reflect lay opinion.",
"title": ""
},
{
"docid": "b96836da7518ceccace39347f06067c6",
"text": "A number of visual question answering approaches have been proposed recently, aiming at understanding the visual scenes by answering the natural language questions. While the image question answering has drawn significant attention, video question answering is largely unexplored. Video-QA is different from Image-QA since the information and the events are scattered among multiple frames. In order to better utilize the temporal structure of the videos and the phrasal structures of the answers, we propose two mechanisms: the re-watching and the re-reading mechanisms and combine them into the forgettable-watcher model. Then we propose a TGIF-QA dataset for video question answering with the help of automatic question generation. Finally, we evaluate the models on our dataset. The experimental results show the effectiveness of our proposed models.",
"title": ""
},
{
"docid": "fccbcdff722a297e5a389674d7557a18",
"text": "For the last few decades more than twenty standardized usability questionnaires for evaluating software systems have been proposed. These instruments have been widely used in the assessment of usability of user interfaces. They have their own characteristics, can be generic or address specific kinds of systems and can be composed of one or several items. Some comparison or comparative studies were also conducted to identify the best one in different situations. All these issues should be considered while choosing a questionnaire. In this paper, we present an extensive review of these questionnaires considering their key features, some classifications and main comparison studies already performed. Moreover, we present the result of a detailed analysis of all items being evaluated in each questionnaire to indicate those that can identify users’ perceptions about specific usability problems. This analysis was performed by confronting each questionnaire item (around 475 items) with usability criteria proposed by quality standards (ISO 9421-11 and ISO/WD 9241-112) and classical quality ergonomic criteria.",
"title": ""
},
{
"docid": "2b6d63bde28ee4a70c73cc25556262db",
"text": "Sentiment analysis or Opinion mining is becoming an important task both from academics and commercial standpoint. In recent years text mining has become most promising area for research. There is an exponential growth with respect to World Wide Web, Mobile Technologies, Internet usage and business on electronic commerce applications. Because of which web opinion sources like online shopping portals, discussion forums, peer-to-peer networks, groups, blogs, micro blogs and social networking applications are extensively used to share the information, experience and opinions. In sentiment analysis, the opinion is evaluated to its positivity, negativity and neutrality with respect to the complete document or object. But this level of analysis does not provide the necessary detailed information for many applications. To obtain more fine grained analysis we need to go to Aspect Based Sentiment Analysis (ABSA). Aspect Based Sentiment analysis introduces a suite of problems which require deeper NLP capabilities and also produces a rich set of results.",
"title": ""
},
{
"docid": "b525081979bebe54e2262086170cbb31",
"text": " Activity recognition strategies assume large amounts of labeled training data which require tedious human labor to label. They also use hand engineered features, which are not best for all applications, hence required to be done separately for each application. Several recognition strategies have benefited from deep learning for unsupervised feature selection, which has two important property – fine tuning and incremental update. Question! Can deep learning be leveraged upon for continuous learning of activity models from streaming videos? Contributions",
"title": ""
},
{
"docid": "d4e26b428f59666e4f83a0c219e4e028",
"text": "Tensor Train decomposition is used across many branches of machine learning, but until now it lacked an implementation with GPU support, batch processing, automatic differentiation, and versatile functionality for Riemannian optimization framework, which takes in account the underlying manifold structure in order to construct efficient optimization methods. In this work, we propose a library that aims to fix it and makes machine learning papers that rely on Tensor Train decomposition easier to implement. The library includes 92% test coverage, examples, and API reference documentation.",
"title": ""
},
{
"docid": "f76eae1326c6767c520bc4d318b239fd",
"text": "A challenging goal of generative and developmental systems (GDS) is to effectively evolve neural networks as complex and capable as those found in nature. Two key properties of neural structures in nature are regularity and modularity. While HyperNEAT has proven capable of generating neural network connectivity patterns with regularities, its ability to evolve modularity remains in question. This paper investigates how altering the traditional approach to determining whether connections are expressed in HyperNEAT influences modularity. In particular, an extension is introduced called a Link Expression Output (HyperNEAT-LEO) that allows HyperNEAT to evolve the pattern of weights independently from the pattern of connection expression. Because HyperNEAT evolves such patterns as functions of geometry, important general topographic principles for organizing connectivity can be seeded into the initial population. For example, a key topographic concept in nature that encourages modularity is locality, that is, components of a module are located near each other. As experiments in this paper show, by seeding HyperNEAT with a bias towards local connectivity implemented through the LEO, modular structures arise naturally. Thus this paper provides an important clue to how an indirect encoding of network structure can be encouraged to evolve modularity.",
"title": ""
},
{
"docid": "a920ed7775a73791946eb5610387bc23",
"text": "A limiting factor for photosynthetic organisms is their light-harvesting efficiency, that is the efficiency of their conversion of light energy to chemical energy. Small modifications or variations of chlorophylls allow photosynthetic organisms to harvest sunlight at different wavelengths. Oxygenic photosynthetic organisms usually utilize only the visible portion of the solar spectrum. The cyanobacterium Acaryochloris marina carries out oxygenic photosynthesis but contains mostly chlorophyll d and only traces of chlorophyll a. Chlorophyll d provides a potential selective advantage because it enables Acaryochloris to use infrared light (700-750 nm) that is not absorbed by chlorophyll a. Recently, an even more red-shifted chlorophyll termed chlorophyll f has been reported. Here, we discuss using modified chlorophylls to extend the spectral region of light that drives photosynthetic organisms.",
"title": ""
},
{
"docid": "0caa6d4623fb0414facb76ccd8eaa235",
"text": "Because of large amounts of unstructured text data generated on the Internet, text mining is believed to have high commercial value. Text mining is the process of extracting previously unknown, understandable, potential and practical patterns or knowledge from the collection of text data. This paper introduces the research status of text mining. Then several general models are described to know text mining in the overall perspective. At last we classify text mining work as text categorization, text clustering, association rule extraction and trend analysis according to applications.",
"title": ""
},
{
"docid": "f099eeead6741665f061fcfe736c5c9f",
"text": "For many applications, in particular in natural science, the task is to determine hidden system parameters from a set of measurements. Often, the forward process from parameterto measurement-space is well-defined, whereas the inverse problem is ambiguous: multiple parameter sets can result in the same measurement. To fully characterize this ambiguity, the full posterior parameter distribution, conditioned on an observed measurement, has to be determined. We argue that a particular class of neural networks is well suited for this task – so-called Invertible Neural Networks (INNs). Unlike classical neural networks, which attempt to solve the ambiguous inverse problem directly, INNs focus on learning the forward process, using additional latent output variables to capture the information otherwise lost. Due to invertibility, a model of the corresponding inverse process is learned implicitly. Given a specific measurement and the distribution of the latent variables, the inverse pass of the INN provides the full posterior over parameter space. We prove theoretically and verify experimentally, on artificial data and real-world problems from medicine and astrophysics, that INNs are a powerful analysis tool to find multi-modalities in parameter space, uncover parameter correlations, and identify unrecoverable parameters.",
"title": ""
},
{
"docid": "c5650f5f5efd4a3ba9a79d51105db057",
"text": "Ptosis of the earlobe is a common consequence of ageing, defined as an unappealingly large free caudal segment of over 5 mm. It is therefore important to consider reduction as a complement to rhytidectomy in selected patients. Moreover, facelifting operations can result in disproportionate or poorly positioned earlobes. Current earlobe-reducing techniques can leave a scar on the free lateral edge causing notching or involve complex pattern excisions with limited resection capability and the risk of deformities. The presented technique, on the other hand, is versatile and easy to use, as it follows general geometric principles. Excision of the designed area results in an earlobe flap which can be rotated in the excision defect. This results in ideal scar locations, situated at the sub-antitragal groove and at the cheek junction. The technique is adjustable, to incorporate potential piercing holes. This technique takes approximately 15 minutes per earlobe to complete. The resulting earlobes have undisturbed free borders. No vascularization-related flap problems were noted. This technique is a viable method for reducing the earlobe with minimally visible scars. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
},
{
"docid": "931172f9cd6de70cbd303f4e9ac27a66",
"text": "The development of a complex game is a time consuming task that requires a significant amount of content generation, including terrains, objects, characters, etc that requires a lot of effort from the a designing team. The quality of such content impacts the project costs and budget. One of the biggest challenges concerning the content is how to improve its details and at the same time lower the creation costs. In this context procedural content generation techniques can help to reduce the costs associated with content creation. This paper presents a survey of classical and modern techniques focused on procedural content generation suitable for game development. They can be used to produce terrains, coastlines, rivers, roads and cities. All techniques are classified as assisted (require human intervention/guidance in order to produce results) or non-assisted (require few or no human intervention/guidance to produce the desired results).",
"title": ""
},
{
"docid": "8be921cfab4586b6a19262da9a1637de",
"text": "Automatic segmentation of microscopy images is an important task in medical image processing and analysis. Nucleus detection is an important example of this task. Mask-RCNN is a recently proposed state-of-the-art algorithm for object detection, object localization, and object instance segmentation of natural images. In this paper we demonstrate that Mask-RCNN can be used to perform highly effective and efficient automatic segmentations of a wide range of microscopy images of cell nuclei, for a variety of cells acquired under a variety of conditions.",
"title": ""
},
{
"docid": "5f6b248776b3b7ad7a840ac5224587be",
"text": "We present in this paper a superpixel segmentation algorithm called Linear Spectral Clustering (LSC), which produces compact and uniform superpixels with low computational costs. Basically, a normalized cuts formulation of the superpixel segmentation is adopted based on a similarity metric that measures the color similarity and space proximity between image pixels. However, instead of using the traditional eigen-based algorithm, we approximate the similarity metric using a kernel function leading to an explicitly mapping of pixel values and coordinates into a high dimensional feature space. We revisit the conclusion that by appropriately weighting each point in this feature space, the objective functions of weighted K-means and normalized cuts share the same optimum point. As such, it is possible to optimize the cost function of normalized cuts by iteratively applying simple K-means clustering in the proposed feature space. LSC is of linear computational complexity and high memory efficiency and is able to preserve global properties of images. Experimental results show that LSC performs equally well or better than state of the art superpixel segmentation algorithms in terms of several commonly used evaluation metrics in image segmentation.",
"title": ""
},
{
"docid": "1c386cf468f62a812640b7f8b528bb73",
"text": "An efficient nanomedical platform that can combine two-photon cell imaging, near infrared (NIR) light and pH dual responsive drug delivery, and photothermal treatment was successfully developed based on fluorescent porous carbon-nanocapsules (FPC-NCs, size ∼100 nm) with carbon dots (CDs) embedded in the shell. The stable, excitation wavelength (λex)-tunable and upconverted fluorescence from the CDs embedded in the porous carbon shell enable the FPC-NCs to serve as an excellent confocal and two-photon imaging contrast agent under the excitation of laser with a broad range of wavelength from ultraviolet (UV) light (405 nm) to NIR light (900 nm). The FPC-NCs demonstrate a very high loading capacity (1335 mg g(-1)) toward doxorubicin drug benefited from the hollow cavity structure, porous carbon shell, as well as the supramolecular π stacking and electrostatic interactions between the doxorubicin molecules and carbon shell. In addition, a responsive release of doxorubicin from the FPC-NCs can be activated by lowering the pH to acidic (from 7.4 to 5.0) due to the presence of pH-sensitive carboxyl groups on the FPC-NCs and amino groups on doxorubicin molecules. Furthermore, the FPC-NCs can absorb and effectively convert the NIR light to heat, thus, manifest the ability of NIR-responsive drug release and combined photothermal/chemo-therapy for high therapeutic efficacy.",
"title": ""
}
] |
scidocsrr
|
a37ff69813766d924b8deecb55872580
|
Extraction and Integration of Partially Overlapping Web Sources
|
[
{
"docid": "a15f80b0a0ce17ec03fa58c33c57d251",
"text": "The World-Wide Web consists of a huge number of unstructured documents, but it also contains structured data in the form of HTML tables. We extracted 14.1 billion HTML tables from Google’s general-purpose web crawl, and used statistical classification techniques to find the estimated 154M that contain high-quality relational data. Because each relational table has its own “schema” of labeled and typed columns, each such table can be considered a small structured database. The resulting corpus of databases is larger than any other corpus we are aware of, by at least five orders of magnitude. We describe the WebTables system to explore two fundamental questions about this collection of databases. First, what are effective techniques for searching for structured data at search-engine scales? Second, what additional power can be derived by analyzing such a huge corpus? First, we develop new techniques for keyword search over a corpus of tables, and show that they can achieve substantially higher relevance than solutions based on a traditional search engine. Second, we introduce a new object derived from the database corpus: the attribute correlation statistics database (AcsDB) that records corpus-wide statistics on cooccurrences of schema elements. In addition to improving search relevance, the AcsDB makes possible several novel applications: schema auto-complete, which helps a database designer to choose schema elements; attribute synonym finding, which automatically computes attribute synonym pairs for schema matching; and join-graph traversal, which allows a user to navigate between extracted schemas using automatically-generated join links. ∗Work done while all authors were at Google, Inc. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commer cial advantage, the VLDB copyright notice and the title of the publication an d its date appear, and notice is given that copying is by permission of the Very L arge Data Base Endowment. To copy otherwise, or to republish, to post o n servers or to redistribute to lists, requires a fee and/or special pe rmission from the publisher, ACM. VLDB ’08 Auckland, New Zealand Copyright 2008 VLDB Endowment, ACM 000-0-00000-000-0/00/ 00.",
"title": ""
},
{
"docid": "c9e47bfe0f1721a937ba503ed9913dba",
"text": "The Web contains a vast amount of structured information such as HTML tables, HTML lists and deep-web databases; there is enormous potential in combining and re-purposing this data in creative ways. However, integrating data from this relational web raises several challenges that are not addressed by current data integration systems or mash-up tools. First, the structured data is usually not published cleanly and must be extracted (say, from an HTML list) before it can be used. Second, due to the vastness of the corpus, a user can never know all of the potentially-relevant databases ahead of time (much less write a wrapper or mapping for each one); the source databases must be discovered during the integration process. Third, some of the important information regarding the data is only present in its enclosing web page and needs to be extracted appropriately. This paper describes Octopus, a system that combines search, extraction, data cleaning and integration, and enables users to create new data sets from those found on the Web. The key idea underlying Octopus is to offer the user a set of best-effort operators that automate the most labor-intensive tasks. For example, the Search operator takes a search-style keyword query and returns a set of relevance-ranked and similarity-clustered structured data sources on the Web; the Context operator helps the user specify the semantics of the sources by inferring attribute values that may not appear in the source itself, and the Extend operator helps the user find related sources that can be joined to add new attributes to a table. Octopus executes some of these operators automatically, but always allows the user to provide feedback and correct errors. We describe the algorithms underlying each of these operators and experiments that demonstrate their efficacy.",
"title": ""
}
] |
[
{
"docid": "bd19395492dfbecd58f5cfd56b0d00a7",
"text": "The ubiquity of the various cheap embedded sensors on mobile devices, for example cameras, microphones, accelerometers, and so on, is enabling the emergence of participatory sensing applications. While participatory sensing can benefit the individuals and communities greatly, the collection and analysis of the participators' location and trajectory data may jeopardize their privacy. However, the existing proposals mostly focus on participators' location privacy, and few are done on participators' trajectory privacy. The effective analysis on trajectories that contain spatial-temporal history information will reveal participators' whereabouts and the relevant personal privacy. In this paper, we propose a trajectory privacy-preserving framework, named TrPF, for participatory sensing. Based on the framework, we improve the theoretical mix-zones model with considering the time factor from the perspective of graph theory. Finally, we analyze the threat models with different background knowledge and evaluate the effectiveness of our proposal on the basis of information entropy, and then compare the performance of our proposal with previous trajectory privacy protections. The analysis and simulation results prove that our proposal can protect participators' trajectories privacy effectively with lower information loss and costs than what is afforded by the other proposals.",
"title": ""
},
{
"docid": "3c4f19544e9cc51d307c6cc9aea63597",
"text": "Math anxiety is a negative affective reaction to situations involving math. Previous work demonstrates that math anxiety can negatively impact math problem solving by creating performance-related worries that disrupt the working memory needed for the task at hand. By leveraging knowledge about the mechanism underlying the math anxiety-performance relationship, we tested the effectiveness of a short expressive writing intervention that has been shown to reduce intrusive thoughts and improve working memory availability. Students (N = 80) varying in math anxiety were asked to sit quietly (control group) prior to completing difficulty-matched math and word problems or to write about their thoughts and feelings regarding the exam they were about to take (expressive writing group). For the control group, high math-anxious individuals (HMAs) performed significantly worse on the math problems than low math-anxious students (LMAs). In the expressive writing group, however, this difference in math performance across HMAs and LMAs was significantly reduced. Among HMAs, the use of words related to anxiety, cause, and insight in their writing was positively related to math performance. Expressive writing boosts the performance of anxious students in math-testing situations.",
"title": ""
},
{
"docid": "f3b76c5ad1841a56e6950f254eda8b17",
"text": "Due to the complexity of human languages, most of sentiment classification algorithms are suffered from a huge-scale dimension of vocabularies which are mostly noisy and redundant. Deep Belief Networks (DBN) tackle this problem by learning useful information in input corpus with their several hidden layers. Unfortunately, DBN is a time-consuming and computationally expensive process for large-scale applications. In this paper, a semi-supervised learning algorithm, called Deep Belief Networks with Feature Selection (DBNFS) is developed. Using our chi-squared based feature selection, the complexity of the vocabulary input is decreased since some irrelevant features are filtered which makes the learning phase of DBN more efficient. The experimental results of our proposed DBNFS shows that the proposed DBNFS can achieve higher classification accuracy and can speed up training time compared with others well-known semi-supervised learning algorithms.",
"title": ""
},
{
"docid": "dca9a39a9fdf69825ab37196a8b8acea",
"text": "We contrast two seemingly distinct approaches to the task of question answering (QA) using Freebase: one based on information extraction techniques, the other on semantic parsing. Results over the same test-set were collected from two state-ofthe-art, open-source systems, then analyzed in consultation with those systems’ creators. We conclude that the differences between these technologies, both in task performance, and in how they get there, is not significant. This suggests that the semantic parsing community should target answering more compositional open-domain questions that are beyond the reach of more direct information extraction methods.",
"title": ""
},
{
"docid": "49c8cd55ffc5de2fe6064837be2f9816",
"text": "L-theanine acid is an amino acid in tea which affects mental state directly. Along with other most popular tea types; white, green, and black tea, Oolong tea also has sufficient L-theanine to relax the human brain. It apparently can reduce the concern, blood pressure, dissolve the fat in the arteries, and especially slow aging by substances against free radicals. Therefore, this research study about the effect of L-theanine in Oolong Tea on human brain's attention focused on meditation during book reading state rely on each person by using electroencephalograph (EEG) and K-means clustering. An electrophysiological monitoring will properly measure the voltage fluctuation of Alpha rhythm for the understanding of higher attention processes of human brain precisely. K-means clustering investigates and defines that the group of converted waves data has a variable effective level rely on each classified group, which female with lower BMI has a higher effect on L-theanine than male apparently. In conclusion, the results promise the L-theanine significantly affects on meditation by increasing in Alpha waves on each person that beneficially supports production proven of Oolong tea in the future.",
"title": ""
},
{
"docid": "64b9d7fa14bd798f1b5af484d4c209ef",
"text": "Machine learning and AI-assisted trading have attracted growing interest for the past few years. Here, we use this approach to test the hypothesis that the inefficiency of the cryptocurrency market can be exploited to generate abnormal profits. We analyse daily data for 1, 681 cryptocurrencies for the period between Nov. 2015 and Apr. 2018. We show that simple trading strategies assisted by state-of-the-art machine learning algorithms outperform standard benchmarks. Our results show that non-trivial, but ultimately simple, algorithmic mechanisms can help anticipate the short-term evolution of the cryptocurrency market.",
"title": ""
},
{
"docid": "aa9450cdbdb1162015b4d931c32010fb",
"text": "The design of a low-cost rectenna for low-power applications is presented. The rectenna is designed with the use of analytical models and closed-form analytical expressions. This allows for a fast design of the rectenna system. To acquire a small-area rectenna, a layered design is proposed. Measurements indicate the validity range of the analytical models.",
"title": ""
},
{
"docid": "0d75a194abf88a0cbf478869dc171794",
"text": "As a promising way for heterogeneous data analytics, consensus clustering has attracted increasing attention in recent decades. Among various excellent solutions, the co-association matrix based methods form a landmark, which redefines consensus clustering as a graph partition problem. Nevertheless, the relatively high time and space complexities preclude it from wide real-life applications. We, therefore, propose Spectral Ensemble Clustering (SEC) to leverage the advantages of co-association matrix in information integration but run more efficiently. We disclose the theoretical equivalence between SEC and weighted K-means clustering, which dramatically reduces the algorithmic complexity. We also derive the latent consensus function of SEC, which to our best knowledge is the first to bridge co-association matrix based methods to the methods with explicit global objective functions. Further, we prove in theory that SEC holds the robustness, generalizability, and convergence properties. We finally extend SEC to meet the challenge arising from incomplete basic partitions, based on which a row-segmentation scheme for big data clustering is proposed. Experiments on various real-world data sets in both ensemble and multi-view clustering scenarios demonstrate the superiority of SEC to some state-of-the-art methods. In particular, SEC seems to be a promising candidate for big data clustering.",
"title": ""
},
{
"docid": "e640c691a45a5435dcdb7601fb581280",
"text": "We study the problem of response selection for multi-turn conversation in retrieval-based chatbots. The task involves matching a response candidate with a conversation context, the challenges for which include how to recognize important parts of the context, and how to model the relationships among utterances in the context. Existing matching methods may lose important information in contexts as we can interpret them with a unified framework in which contexts are transformed to fixed-length vectors without any interaction with responses before matching. This motivates us to propose a new matching framework that can sufficiently carry important information in contexts to matching and model relationships among utterances at the same time. The new framework, which we call a sequential matching framework (SMF), lets each utterance in a context interact with a response candidate at the first step and transforms the pair to a matching vector. The matching vectors are then accumulated following the order of the utterances in the context with a recurrent neural network (RNN) that models relationships among utterances. Context-response matching is then calculated with the hidden states of the RNN. Under SMF, we propose a sequential convolutional network and sequential attention network and conduct experiments on two public data sets to test their performance. Experiment results show that both models can significantly outperform state-of-the-art matching methods. We also show that the models are interpretable with visualizations that provide us insights on how they capture and leverage important information in contexts for matching.",
"title": ""
},
{
"docid": "6622922fb28cce3df8c68c21ac55e20e",
"text": "Semantic-based approaches are relatively new technologies. Some of these technologies are supported by specifications of W3 Consortium, i.e. RDF, SPARQL and so on. There are many areas where semantic data can be utilized, e.g. social networks, annotation of protein sequences etc. From the physical database design point of view, several index data structures are utilized to handle this data. In many cases, the well-known B-tree is used as a basic index supporting some operations. Since the semantic data are multidimensional, a common way is to use a number of B-trees to index the data. In this article, we review other index data structures; we show that we can create only one index when we utilize a multidimensional data structure like the R-tree. We compare a performance of the B-tree indices with the R-tree and some its variants. Our experiments are performed over a huge semantic database, we show advantages and disadvantages of these data structures.",
"title": ""
},
{
"docid": "3309e09d16e74f87a507181bd82cd7f0",
"text": "The goal of this work is to overview and summarize the grasping taxonomies reported in the literature. Our long term goal is to understand how to reduce mechanical complexity of anthropomorphic hands and still preserve their dexterity. On the basis of a literature survey, 33 different grasp types are taken into account. They were then arranged in a hierarchical manner, resulting in 17 grasp types.",
"title": ""
},
{
"docid": "06003e7812d2fdd8ef71b0bd40af9753",
"text": "Social service professionals are more frequently identifying children who witness adult domestic violence as victims of that abuse. This article expands common definitions of how children witness violence, and adult domestic violence in particular. Over 80 research papers were reviewed and a variety of behavioral, emotional, cognitive and physical functioning problems among children were found to be associated with exposure to domestic violence. Factors that appear to mediate the impact of witnessing violence, such as child gender, age, and time since last exposure to violence are identified. Concerns about research methodology are also raised. CHILDREN’S WITNESSING OF ADULT DOMESTIC VIOLENCE Many people have suggested that family violence – at least to the degree it is observed today – is a recent phenomenon. Yet violence between intimates has long been a part of family life. It has been described repeatedly in religious and historical documents across many centuries, dating as far back as the Roman Empire (Davidson, 1977; Dobash & Dobash, 1979). Some have also argued that current levels of family violence reflect a break-down in the moral structure of the family (see Levine, 1986). This too is unlikely. Rather, as Gordon (1988) suggests, the “ebb-and-flow pattern of concern about family violence...suggests that its incidence has not changed as much as its visibility” (p. 2). Children who witness violence between adults in their homes are only the most recent victims to become visible. These children have been called the “silent,” “forgotten,” and “unintended” victims of adult-to-adult domestic violence (Elbow, 1982; Groves et al., 1993; Rosenbaum & O’Leary, 1981). Studies of archived case records from social service and governmental agencies provide ample evidence that violence has long occurred at levels similar to those measured today and that children are frequently present during violence incidents (Edleson, 1991; Gordon, 1988; Peterson, 1991; Pleck, 1987).",
"title": ""
},
{
"docid": "0103439813a724a3df2e3bd827680abd",
"text": "Unsupervised automatic topic discovery in micro-blogging social networks is a very challenging task, as it involves the analysis of very short, noisy, ungrammatical and uncontextual messages. Most of the current approaches to this problem are basically syntactic, as they focus either on the use of statistical techniques or on the analysis of the co-occurrences between the terms. This paper presents a novel topic discovery methodology, based on the mapping of hashtags to WordNet terms and their posterior clustering, in which semantics plays a centre role. The paper also presents a detailed case study in the field of Oncology, in which the discovered topics are thoroughly compared to a golden standard, showing promising results. 2015 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "caac2672c444172f866e5568bbaee251",
"text": "In the setting of secure multiparty computation, a set of parties with private inputs wish to compute some function of their inputs without revealing anything but their output. Over the last decade, the efficiency of secure two-party computation has advanced in leaps and bounds, with speedups of some orders of magnitude, making it fast enough to be of use in practice. In contrast, progress on the case of multiparty computation (with more than two parties) has been much slower, with very little work being done. Currently, the only implemented efficient multiparty protocol has many rounds of communication (linear in the depth of the circuit being computed) and thus is not suited for Internet-like settings where latency is not very low. In this paper, we construct highly efficient constant-round protocols for the setting of multiparty computation for semi-honest adversaries. Our protocols work by constructing a multiparty garbled circuit, as proposed in BMR (Beaver et al., STOC 1990). Our first protocol uses oblivious transfer and constitutes the first concretely-efficient constant-round multiparty protocol for the case of no honest majority. Our second protocol uses BGW, and is significantly more efficient than the FairplayMP protocol (Ben-David et al., CCS 2008) that also uses BGW.\n We ran extensive experimentation comparing our different protocols with each other and with a highly-optimized implementation of semi-honest GMW. Due to our protocol being constant round, it significantly outperforms GMW in Internet-like settings. For example, with 13 parties situated in the Virginia and Ireland Amazon regions and the SHA256 circuit with 90,000 gates and of depth 4000, the overall running time of our protocol is 25 seconds compared to 335 seconds for GMW. Furthermore, our online time is under half a second compared to 330 seconds for GMW.",
"title": ""
},
{
"docid": "be0806fc3f2f77642f72bbfdc8248f52",
"text": "Transparent electrodes with a dielectric-metal-dielectric (DMD) structure can be implemented in a simple manufacturing process and have good optical and electrical properties. In this study, nickel oxide (NiO) is introduced into the DMD structure as a more appropriate dielectric material that has a high conduction band for electron blocking and a low valence band for efficient hole transport. The indium-free NiO/Ag/NiO (NAN) transparent electrode exhibits an adjustable high transmittance of ∼82% combined with a low sheet resistance of ∼7.6 Ω·s·q(-1) and a work function of 5.3 eV after UVO treatment. The NAN electrode shows excellent surface morphology and good thermal, humidity, and environmental stabilities. Only a small change in sheet resistance can be found after NAN electrode is preserved in air for 1 year. The power conversion efficiencies of organic photovoltaic cells with NAN electrodes deposited on glass and polyethylene terephthalate (PET) substrates are 6.07 and 5.55%, respectively, which are competitive with those of indium tin oxide (ITO)-based devices. Good photoelectric properties, the low-cost material, and the room-temperature deposition process imply that NAN electrode is a striking candidate for low-cost and flexible transparent electrode for efficient flexible optoelectronic devices.",
"title": ""
},
{
"docid": "a81c5da3fc32903dd70e90b020c9394a",
"text": "We build a grammatical error correction (GEC) system primarily based on the state-of-the-art statistical machine translation (SMT) approach, using task-specific features and tuning, and further enhance it with the modeling power of neural network joint models. The SMT-based system is weak in generalizing beyond patterns seen during training and lacks granularity below the word level. To address this issue, we incorporate a character-level SMT component targeting the misspelled words that the original SMT-based system fails to correct. Our final system achieves 53.14% F0.5 score on the benchmark CoNLL-2014 test set, an improvement of 3.62% F0.5 over the best previous published score.",
"title": ""
},
{
"docid": "b8a98eccec1e26ae195463d9754e1278",
"text": "Social sensing is a new big data application paradigm for Cyber-Physical Systems (CPS), where a group of individuals volunteer (or are recruited) to report measurements or observations about the physical world at scale. A fundamental challenge in social sensing applications lies in discovering the correctness of reported observations and reliability of data sources without prior knowledge on either of them. We refer to this problem as truth discovery. While prior studies have made progress on addressing this challenge, two important limitations exist: (i) current solutions did not fully explore the uncertainty aspect of human reported data, which leads to sub-optimal truth discovery results; (ii) current truth discovery solutions are mostly designed as sequential algorithms that do not scale well to large-scale social sensing events. In this paper, we develop a Scalable Uncertainty-Aware Truth Discovery (SUTD) scheme to address the above limitations. The SUTD scheme solves a constraint estimation problem to jointly estimate the correctness of reported data and the reliability of data sources while explicitly considering the uncertainty on the reported data. To address the scalability challenge, the SUTD is designed to run a Graphic Processing Unit (GPU) with thousands of cores, which is shown to run two to three orders of magnitude faster than the sequential truth discovery solutions. In evaluation, we compare our SUTD scheme to the state-of-the-art solutions using three real world datasets collected from Twitter: Paris Attack, Oregon Shooting, and Baltimore Riots, all in 2015. The evaluation results show that our new scheme significantly outperforms the baselines in terms of both truth discovery accuracy and execution time.",
"title": ""
},
{
"docid": "db8297321066523cac75f81345994f34",
"text": "Heterogeneous features of thyroid nodules in ultrasound images is very difficult task when radiologists and physicians manually draw a complete shape of nodule, size and shape, image or distinguish what type of nodule is exist. Segmentation and classification is important methods for medical image processing. Ultrasound imaging is the best way to prediction of which type of thyroid is there. In this paper, uses the groups Benign (non-cancerous) and Malignant (cancerous) Thyroid Nodules images were used. The texture feature method like GLCM are very useful for classifying texture of images and these features are used to train the classifiers such as SVM, KNN and Bayesian. The experimental result shows the performance of the classifiers and shows the best predictive value and positively identify the percentage of the non-cancerous or cancerous people and shows the best performance accuracy using the SVM classifier as compare to the KNN and Bayesian classifier. KeywordsThyroid Ultrasound (US) images, Feature extraction, GLCM, RBAC, SVM, KNN and Bayesian.",
"title": ""
},
{
"docid": "8228886ce1093cd3e3f69cdd7bc6173e",
"text": "Evolutionary-biological reasoning suggests that individuals should be differentially susceptible to environmental influences, with some people being not just more vulnerable than others to the negative effects of adversity, as the prevailing diathesis-stress view of psychopathology (and of many environmental influences) maintains, but also disproportionately susceptible to the beneficial effects of supportive and enriching experiences (or just the absence of adversity). Evidence consistent with the proposition that individuals differ in plasticity is reviewed. The authors document multiple instances in which (a) phenotypic temperamental characteristics, (b) endophenotypic attributes, and (c) specific genes function less like \"vulnerability factors\" and more like \"plasticity factors,\" thereby rendering some individuals more malleable or susceptible than others to both negative and positive environmental influences. Discussion focuses upon limits of the evidence, statistical criteria for distinguishing differential susceptibility from diathesis stress, potential mechanisms of influence, and unknowns in the differential-susceptibility equation.",
"title": ""
},
{
"docid": "dfc618f0ef6497d8ad45aab5396da9db",
"text": "Beginning in the mid-1990s, a number of consultants independently created and evolved what later came to be known as agile software development methodologies. Agile methodologies and practices emerged as an attempt to more formally and explicitly embrace higher rates of change in software requirements and customer expectations. Some prominent agile methodologies are Adaptive Software Development, Crystal, Dynamic Systems Development Method, Extreme Programming (XP), Feature-Driven Development (FDD), Pragmatic Programming, and Scrum. This chapter presents the principles that underlie and unite the agile methodologies. Then, 32 practices used in agile methodologies are presented. Finally, three agile methodologies (XP, FDD, and Scrum) are explained. Most often, software development teams select a subset of the agile practices and create their own hybrid software development methodology rather than strictly adhere to all the practices of a predefined agile methodology. Teams that use primarily agile practices are most often smallto medium-sized, colocated teams working on less complex projects. 1. A gile Origins and Manifesto . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2. A gile and Lean Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2 .1.",
"title": ""
}
] |
scidocsrr
|
0407128341e92d2e6f7ff312ae7d1185
|
Fake News or Truth? Using Satirical Cues to Detect Potentially Misleading News.
|
[
{
"docid": "be3f18e5fbaf3ad45976ca867698a4bc",
"text": "Widespread adoption of internet technologies has changed the way that news is created and consumed. The current online news environment is one that incentivizes speed and spectacle in reporting, at the cost of fact-checking and verification. The line between user generated content and traditional news has also become increasingly blurred. This poster reviews some of the professional and cultural issues surrounding online news and argues for a two-pronged approach inspired by Hemingway’s “automatic crap detector” (Manning, 1965) in order to address these problems: a) proactive public engagement by educators, librarians, and information specialists to promote digital literacy practices; b) the development of automated tools and technologies to assist journalists in vetting, verifying, and fact-checking, and to assist news readers by filtering and flagging dubious information.",
"title": ""
},
{
"docid": "e177c04d8eb729046d368965dbcedd4c",
"text": "This study investigated biased message processing of political satire in The Colbert Report and the influence of political ideology on perceptions of Stephen Colbert. Results indicate that political ideology influences biased processing of ambiguous political messages and source in late-night comedy. Using data from an experiment (N = 332), we found that individual-level political ideology significantly predicted perceptions of Colbert’s political ideology. Additionally, there was no significant difference between the groups in thinking Colbert was funny, but conservatives were more likely to report that Colbert only pretends to be joking and genuinely meant what he said while liberals were more likely to report that Colbert used satire and was not serious when offering political statements. Conservatism also significantly predicted perceptions that Colbert disliked liberalism. Finally, a post hoc analysis revealed that perceptions of Colbert’s political opinions fully mediated the relationship between political ideology and individual-level opinion.",
"title": ""
},
{
"docid": "dba5777004cf43d08a58ef3084c25bd3",
"text": "This paper investigates the problem of automatic humour recognition, and provides and in-depth analysis of two of the most frequently observ ed features of humorous text: human-centeredness and negative polarity. T hrough experiments performed on two collections of humorous texts, we show that th ese properties of verbal humour are consistent across different data s ets.",
"title": ""
}
] |
[
{
"docid": "9380bb09ffc970499931f063008c935f",
"text": "Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduction of operation costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advancement in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8d8e7327f79b256b1ee9dac9a2573b55",
"text": "The objective of this work is set-based face recognition, i.e. to decide if two sets of images of a face are of the same person or not. Conventionally, the set-wise feature descriptor is computed as an average of the descriptors from individual face images within the set. In this paper, we design a neural network architecture that learns to aggregate based on both “visual” quality (resolution, illumination), and “content” quality (relative importance for discriminative classification). To this end, we propose a Multicolumn Network (MN) that takes a set of images (the number in the set can vary) as input, and learns to compute a fix-sized feature descriptor for the entire set. To encourage high-quality representations, each individual input image is first weighted by its “visual” quality, determined by a self-quality assessment module, and followed by a dynamic recalibration based on “content” qualities relative to the other images within the set. Both of these qualities are learnt implicitly during training for setwise classification. Comparing with the previous state-of-the-art architectures trained with the same dataset (VGGFace2), our Multicolumn Networks show an improvement of between 2-6% on the IARPA IJB face recognition benchmarks, and exceed the state of the art for all methods on these benchmarks.",
"title": ""
},
{
"docid": "fc9ddeeae99a4289d5b955c9ba90c682",
"text": "In recent years there have been growing calls for forging greater connections between education and cognitive neuroscience.As a consequence great hopes for the application of empirical research on the human brain to educational problems have been raised. In this article we contend that the expectation that results from cognitive neuroscience research will have a direct and immediate impact on educational practice are shortsighted and unrealistic. Instead, we argue that an infrastructure needs to be created, principally through interdisciplinary training, funding and research programs that allow for bidirectional collaborations between cognitive neuroscientists, educators and educational researchers to grow.We outline several pathways for scaffolding such a basis for the emerging field of ‘Mind, Brain and Education’ to flourish as well as the obstacles that are likely to be encountered along the path.",
"title": ""
},
{
"docid": "ac9996384301546f820afffb4e86114d",
"text": "It is quite possible that ketenimine is formed, as predicted by the authors, in a concerted reaction in the thermolysis of vinyl azide with a barrier of ca. 38 kcal/mol. However, the theoretical treatment is inappropriate as regards the potential vinylnitrene intermediate or transition state, since singlereference methods such as DFT or MP2 cannot provide a correct description. It is now well-known that the lowest singlet state of vinylnitrene is in fact the open-shell singlet (OSS). It cannot be treated adequately by DFT methods, as it requires at least two determinants to be described correctly. Thus, multiconfigurational methods are required. Several such calculations have been reported for the parent vinylnitrene, and substituted vinylnitrenes, but references to these publications are missing in the work of Duarte et al. Indeed, CASPT2 calculations on the relationship between 2Hazirine 2, acetonitrile 3, ketenimine 4, and vinylnitrene 5, have shown that the thermal ring opening of 2 to OSS vinylnitrene 5 has an activation barrier of ca. 33 kcal/mol (Scheme 1). The OSS vinylnitrene 5 undergoes a very facile 1,2-H-shift to acetonitrile with a barrier of ca. 6.5 kcal/mol, whereas the corresponding rearrangement to ketenimine would require ca. 33 kcal/mol. Although it can be debated whether the OSS vinylnitrene 5 is a transition state for ring closure to 2H-azirine without a barrier, or an intermediate with a small barrier of 5 kcal/mol, the important point is that it lies some 25 kcal/mol below the CSS nitrene. Thus, the OSS nitrene needs to be considered in thermal and photochemical chemistry of the C2H3N ensemble. ■ AUTHOR INFORMATION Corresponding Author *C. Wentrup: e-mail, [email protected]. Notes The authors declare no competing financial interest.",
"title": ""
},
{
"docid": "d15dc60ef2fb1e6096a3aba372698fd9",
"text": "One of the most interesting applications of Industry 4.0 paradigm is enhanced process control. Traditionally, process control solutions based on Cyber-Physical Systems (CPS) consider a top-down view where processes are represented as executable high-level descriptions. However, most times industrial processes follow a bottom-up model where processes are executed by low-level devices which are hard-programmed with the process to be executed. Thus, high-level components only may supervise the process execution as devices cannot modify dynamically their behavior. Therefore, in this paper we propose a vertical CPS-based solution (including a reference and a functional architecture) adequate to perform enhanced process control in Industry 4.0 scenarios with a bottom-up view. The proposed solution employs an event-driven service-based architecture where control is performed by means of finite state machines. Furthermore, an experimental validation is provided proving that in more than 97% of cases the proposed solution allows a stable and effective control.",
"title": ""
},
{
"docid": "346e160403ff9eb55c665f6cb8cca481",
"text": "Tasks in visual analytics differ from typical information retrieval tasks in fundamental ways. A critical part of a visual analytics is to ask the right questions when dealing with a diverse collection of information. In this article, we introduce the design and application of an integrated exploratory visualization system called Storylines. Storylines provides a framework to enable analysts visually and systematically explore and study a body of unstructured text without prior knowledge of its thematic structure. The system innovatively integrates latent semantic indexing, natural language processing, and social network analysis. The contributions of the work include providing an intuitive and directly accessible representation of a latent semantic space derived from the text corpus, an integrated process for identifying salient lines of stories, and coordinated visualizations across a spectrum of perspectives in terms of people, locations, and events involved in each story line. The system is tested with the 2006 VAST contest data, in particular, the portion of news articles.",
"title": ""
},
{
"docid": "eb3f72e91f13a3c6faee53c6d4cd4174",
"text": "Recent studies indicate that nearly 75% of queries issued to Web search engines aim at finding information about entities, which are material objects or concepts that exist in the real world or fiction (e.g. people, organizations, products, etc.). Most common information needs underlying this type of queries include finding a certain entity (e.g. “Einstein relativity theory”), a particular attribute or property of an entity (e.g. “Who founded Intel?”) or a list of entities satisfying a certain criteria (e.g. “Formula 1 drivers that won the Monaco Grand Prix”). These information needs can be efficiently addressed by presenting structured information about a target entity or a list of entities retrieved from a knowledge graph either directly as search results or in addition to the ranked list of documents. This tutorial provides a summary of the recent research in knowledge graph entity representation methods and retrieval models. The first part of this tutorial introduces state-of-the-art methods for entity representation, from multi-fielded documents with flat and hierarchical structure to latent dimensional representations based on tensor factorization, while the second part presents recent developments in entity retrieval models, including Fielded Sequential Dependence Model (FSDM) and its parametric extension (PFSDM), as well as entity set expansion and ranking methods.",
"title": ""
},
{
"docid": "a7959808cb41963e8d204c3078106842",
"text": "Human alteration of the global environment has triggered the sixth major extinction event in the history of life and caused widespread changes in the global distribution of organisms. These changes in biodiversity alter ecosystem processes and change the resilience of ecosystems to environmental change. This has profound consequences for services that humans derive from ecosystems. The large ecological and societal consequences of changing biodiversity should be minimized to preserve options for future solutions to global environmental problems.",
"title": ""
},
{
"docid": "20ed67f3f410c3be15c0cabefa4effd8",
"text": "The content selection component of a natural language generation system decides which information should be communicated in its output. We use information from reports on the game of cricket. We first describe a simple factoid-to-text alignment algorithm then treat content selection as a collective classification problem and demonstrate that simple ‘grouping’ of statistics at various levels of granularity yields substantially improved results over a probabilistic baseline. We additionally show that holding back of specific types of input data, and linking database structures with commonality further increase performance.",
"title": ""
},
{
"docid": "60ea79b98eade6b3a7bcd786484aa063",
"text": "This paper analyses the effect of adding Bitcoin, to the portfolio (stocks, bonds, Baltic index, MXEF, gold, real estate and crude oil) of an international investor by using daily data available from 2nd of July, 2010 to 2nd of August, 2016. We conclude that adding Bitcoin to portfolio, over the course of the considered period, always yielded a higher Sharpe ratio. This means that Bitcoin’s returns offset its high volatility. This paper, recognizing the fact that Bitcoin is a relatively new asset class, gives the readers a basic idea about the working of the virtual currency, the increasing number developments in the financial industry revolving around it, its unique features and the detailed look into its continuously growing acceptance across different fronts (Banks, Merchants and Countries) globally. We also construct optimal portfolios to reflect the highly lucrative and largely unexplored opportunities associated with investment in Bitcoin. Keywords—Portfolio management, Bitcoin, optimization, Sharpe ratio.",
"title": ""
},
{
"docid": "3596cd78712e41d5da0b5bfd3e5df4e2",
"text": "In recent years, chip multiprocessors (CMP) have emerged as a solution for high-speed computing demands. However, power dissipation in CMPs can be high if numerous cores are simultaneously active. Dynamic voltage and frequency scaling (DVFS) is widely used to reduce the active power, but its effectiveness and cost depends on the granularity at which it is applied. Per-core DVFS allows the greatest flexibility in controlling power, but incurs the expense of an unrealistically large number of on-chip voltage regulators. Per-chip DVFS, where all cores are controlled by a single regulator overcomes this problem at the expense of greatly reduced flexibility. This work considers the problem of building an intermediate solution, clustering the cores of a multicore processor into DVFS domains and implementing DVFS on a per-cluster basis. Based on a typical workload, we propose a scheme to find similarity among the cores and cluster them based on this similarity. We also provide an algorithm to implement DVFS for the clusters, and evaluate the effectiveness of per-cluster DVFS in power reduction.",
"title": ""
},
{
"docid": "9d940e3fb357cfe03f0b206f816ea34f",
"text": "Plagiarism can be of many different natures, ranging from copying texts to adopting ideas, without giving credit to its originator. This paper presents a new taxonomy of plagiarism that highlights differences between literal plagiarism and intelligent plagiarism, from the plagiarist's behavioral point of view. The taxonomy supports deep understanding of different linguistic patterns in committing plagiarism, for example, changing texts into semantically equivalent but with different words and organization, shortening texts with concept generalization and specification, and adopting ideas and important contributions of others. Different textual features that characterize different plagiarism types are discussed. Systematic frameworks and methods of monolingual, extrinsic, intrinsic, and cross-lingual plagiarism detection are surveyed and correlated with plagiarism types, which are listed in the taxonomy. We conduct extensive study of state-of-the-art techniques for plagiarism detection, including character n-gram-based (CNG), vector-based (VEC), syntax-based (SYN), semantic-based (SEM), fuzzy-based (FUZZY), structural-based (STRUC), stylometric-based (STYLE), and cross-lingual techniques (CROSS). Our study corroborates that existing systems for plagiarism detection focus on copying text but fail to detect intelligent plagiarism when ideas are presented in different words.",
"title": ""
},
{
"docid": "0a143c2d4af3cc726964a90927556399",
"text": "Humans prefer to interact with each other using speech. Since this is the most natural mode of communication, the humans also want to interact with machines using speech only. So, automatic speech recognition has gained a lot of popularity. Different approaches for speech recognition exists like Hidden Markov Model (HMM), Dynamic Time Warping (DTW), Vector Quantization (VQ), etc. This paper uses Neural Network (NN) along with Mel Frequency Cepstrum Coefficients (MFCC) for speech recognition. Mel Frequency Cepstrum Coefiicients (MFCC) has been used for the feature extraction of speech. This gives the feature of the waveform. For pattern matching FeedForward Neural Network with Back propagation algorithm has been applied. The paper analyzes the various training algorithms present for training the Neural Network and uses train scg for the experiment. The work has been done on MATLAB and experimental results show that system is able to recognize words at sufficiently high accuracy.",
"title": ""
},
{
"docid": "8917629470087a3b7a03b99d461cb63c",
"text": "In this paper, the crucial ingredients for our submission to SemEval-2014 Task 4 “Aspect Level Sentiment Analysis” are discussed. We present a simple aspect detection algorithm, a co-occurrence based method for category detection and a dictionary based sentiment classification algorithm. The dictionary for the latter is based on co-occurrences as well. The failure analysis and related work section focus mainly on the category detection method as it is most distinctive for our work.",
"title": ""
},
{
"docid": "59d39dd0a5535be81c695a7fbd4005c1",
"text": "Over the last decade, accumulating evidence has suggested a causative link between mitochondrial dysfunction and major phenotypes associated with aging. Somatic mitochondrial DNA (mtDNA) mutations and respiratory chain dysfunction accompany normal aging, but the first direct experimental evidence that increased mtDNA mutation levels contribute to progeroid phenotypes came from the mtDNA mutator mouse. Recent evidence suggests that increases in aging-associated mtDNA mutations are not caused by damage accumulation, but rather are due to clonal expansion of mtDNA replication errors that occur during development. Here we discuss the caveats of the traditional mitochondrial free radical theory of aging and highlight other possible mechanisms, including insulin/IGF-1 signaling (IIS) and the target of rapamycin pathways, that underlie the central role of mitochondria in the aging process.",
"title": ""
},
{
"docid": "a9a65ee9ac1469b24e8900de01eb8b19",
"text": "The lung has significant susceptibility to injury from a variety of chemotherapeutic agents. The clinician must be familiar with classic chemotherapeutic agents with well-described pulmonary toxicities and must also be vigilant about a host of new agents that may exert adverse effects on lung function. The diagnosis of chemotherapy-associated lung disease remains an exclusionary process, particularly with respect to considering usual and atypical infections, as well as recurrence of the underlying neoplastic process in these immune compromised patients. In many instances, chemotherapy-associated lung disease may respond to withdrawal of the offending agent and to the judicious application of corticosteroid therapy.",
"title": ""
},
{
"docid": "0ed509f632d855591cd1d350accfbed3",
"text": "Development of efficient nanoparticles (NPs) for cancer therapy remains a challenge. NPs are required to have high stability, uniform size, sufficient drug loading, targeting capability, and ability to overcome drug resistance. In this study, the development of a NP formulation that can meet all these challenging requirements for targeted glioblastoma multiform (GBM) therapy is reported. This multifunctional NP is composed of a polyethylene glycol-coated magnetic iron oxide NP conjugated with cyclodextrin and chlorotoxin (CTX) and loaded with fluorescein and paclitaxel (PTX) (IONP-PTX-CTX-FL). The physicochemical properties of the IONP-PTX-CTX-FL are characterized by transmission electron microscope, dynamic light scattering, and high-performance liquid chromatography. The cellular uptake of NPs is studied using flow cytometry and confocal microscopy. Cell viability and apoptosis are assessed with the Alamar Blue viability assay and flow cytometry, respectively. The IONP-PTX-CTX-FL had a uniform size of ≈44 nm and high stability in cell culture medium. Importantly, the presence of CTX on NPs enhanced the uptake of the NPs by GBM cells and improved the efficacy of PTX in killing both GBM and GBM drug-resistant cells. The IONP-PTX-CTX-FL demonstrated its great potential for brain cancer therapy and may also be used to deliver PTX to treat other cancers.",
"title": ""
},
{
"docid": "b4462bf06bac13af9e40023019619a78",
"text": "Successful schools ensure that all students master basic skills such as reading and math and have strong backgrounds in other subject areas, including science, history, and foreign language. Recently, however, educators and parents have begun to support a broader educational agenda – one that enhances teachers’ and students’ social and emotional skills. Research indicates that social and emotional skills are associated with success in many areas of life, including effective teaching, student learning, quality relationships, and academic performance. Moreover, a recent meta-analysis of over 300 studies showed that programs designed to enhance social and emotional learning significantly improve students’ social and emotional competencies as well as academic performance. Incorporating social and emotional learning programs into school districts can be challenging, as programs must address a variety of topics in order to be successful. One organization, the Collaborative for Academic, Social, and Emotional Learning (CASEL), provides leadership for researchers, educators, and policy makers to advance the science and practice of school-based social and emotional learning programs. According to CASEL, initiatives to integrate programs into schools should include training on social and emotional skills for both teachers and students, and should receive backing from all levels of the district, including the superintendent, school principals, and teachers. Additionally, programs should be field-tested, evidence-based, and founded on sound",
"title": ""
},
{
"docid": "acc65b944e6c678fbd080c06c78ad497",
"text": "Cue-triggered recall of learned temporal sequences is an important cognitive function that has been attributed to higher brain areas. Here recordings in both anesthetized and awake rats demonstrate that after repeated stimulation with a moving spot that evoked sequential firing of an ensemble of primary visual cortex (V1) neurons, just a brief flash at the starting point of the motion path was sufficient to evoke a sequential firing pattern that reproduced the activation order evoked by the moving spot. The speed of recalled spike sequences may reflect the internal dynamics of the network rather than the motion speed. In awake rats, such recall was observed during a synchronized ('quiet wakeful') brain state having large-amplitude, low-frequency local field potential (LFP) but not in a desynchronized ('active') state having low-amplitude, high-frequency LFP. Such conditioning-enhanced, cue-evoked sequential spiking of a V1 ensemble may contribute to experience-based perceptual inference in a brain state–dependent manner.",
"title": ""
}
] |
scidocsrr
|
ec943c96ae901006d23e2af49a7284cc
|
Modeling Semantic Relevance for Question-Answer Pairs in Web Social Communities
|
[
{
"docid": "68693c88cb62ce28514344d15e9a6f09",
"text": "New types of document collections are being developed by various web services. The service providers keep track of non-textual features such as click counts. In this paper, we present a framework to use non-textual features to predict the quality of documents. We also show our quality measure can be successfully incorporated into the language modeling-based retrieval model. We test our approach on a collection of question and answer pairs gathered from a community based question answering service where people ask and answer questions. Experimental results using our quality measure show a significant improvement over our baseline.",
"title": ""
},
{
"docid": "1ce647f5e36c07745c512ed856a9d517",
"text": "This paper describes a discussion-bot that provides answers to students' discussion board questions in an unobtrusive and human-like way. Using information retrieval and natural language processing techniques, the discussion-bot identifies the questioner's interest, mines suitable answers from an annotated corpus of 1236 archived threaded discussions and 279 course documents and chooses an appropriate response. A novel modeling approach was designed for the analysis of archived threaded discussions to facilitate answer extraction. We compare a self-out and an all-in evaluation of the mined answers. The results show that the discussion-bot can begin to meet students' learning requests. We discuss directions that might be taken to increase the effectiveness of the question matching and answer extraction algorithms. The research takes place in the context of an undergraduate computer science course.",
"title": ""
}
] |
[
{
"docid": "8988a648262b396bf20489eb92f32110",
"text": "Hyaluronic acid (HA), the main component of extracellular matrix, is considered one of the key players in the tissue regeneration process. It has been proven to modulate via specific HA receptors, inflammation, cellular migration, and angiogenesis, which are the main phases of wound healing. Studies have revealed that most HA properties depend on its molecular size. High molecular weight HA displays anti-inflammatory and immunosuppressive properties, whereas low molecular weight HA is a potent proinflammatory molecule. In this review, the authors summarize the role of HA polymers of different molecular weight in tissue regeneration and provide a short overview of main cellular receptors involved in HA signaling. In addition, the role of HA in 2 major steps of wound healing is examined: inflammation and the angiogenesis process. Finally, the antioxidative properties of HA are discussed and its possible clinical implication presented.",
"title": ""
},
{
"docid": "97578b3a8f5f34c96e7888f273d4494f",
"text": "We analyze the use, advantages, and drawbacks of graph kernels in chemoin-formatics, including a comparison of kernel-based approaches with other methodology, as well as examples of applications. Kernel-based machine learning [1], now widely applied in chemoinformatics, delivers state-of-the-art performance [2] in tasks like classification and regression. Molecular graph kernels [3] are a recent development where kernels are defined directly on the molecular structure graph. This allows the adaptation of methods from graph theory to structure graphs and their direct use with kernel learning algorithms. The main advantage of kernel learning, the so-called “kernel trick”, allows for a systematic, computationally feasible, and often globally optimal search for non-linear patterns, as well as the direct use of non-numerical inputs such as strings and graphs. A drawback is that solutions are expressed indirectly in terms of similarity to training samples, and runtimes that are typically quadratic or cubic in the number of training samples. Graph kernels [3] are positive semidefinite functions defined directly on graphs. The most important types are based on random walks, subgraph patterns, optimal assignments, and graphlets. Molecular structure graphs have strong properties that can be exploited [4], e.g., they are undirected, have no self-loops and no multiple edges, are connected (except for salts), annotated, often planar in the graph-theoretic sense, and their vertex degree is bounded by a small constant. In many applications, they are small. Many graph kernels are generalpurpose, some are suitable for structure graphs, and a few have been explicitly designed for them. We present three exemplary applications of the iterative similarity optimal assignment kernel [5], which was designed for the comparison of small structure graphs: The discovery of novel agonists of the peroxisome proliferator-activated receptor g [6] (ligand-based virtual screening), the estimation of acid dissociation constants [7] (quantitative structure-property relationships), and molecular de novo design [8].",
"title": ""
},
{
"docid": "9eb0d79f9c13f30f53fb7214b337880d",
"text": "Many real world problems can be solved with Artificial Neural Networks in the areas of pattern recognition, signal processing and medical diagnosis. Most of the medical data set is seldom complete. Artificial Neural Networks require complete set of data for an accurate classification. This paper dwells on the various missing value techniques to improve the classification accuracy. The proposed system also investigates the impact on preprocessing during the classification. A classifier was applied to Pima Indian Diabetes Dataset and the results were improved tremendously when using certain combination of preprocessing techniques. The experimental system achieves an excellent classification accuracy of 99% which is best than before.",
"title": ""
},
{
"docid": "7604942913928dfb0e0ef486eccbcf8b",
"text": "We connect two scenarios in structured learning: adapting a parser trained on one corpus to another annotation style, and projecting syntactic annotations from one language to another. We propose quasisynchronous grammar (QG) features for these structured learning tasks. That is, we score a aligned pair of source and target trees based on local features of the trees and the alignment. Our quasi-synchronous model assigns positive probability to any alignment of any trees, in contrast to a synchronous grammar, which would insist on some form of structural parallelism. In monolingual dependency parser adaptation, we achieve high accuracy in translating among multiple annotation styles for the same sentence. On the more difficult problem of cross-lingual parser projection, we learn a dependency parser for a target language by using bilingual text, an English parser, and automatic word alignments. Our experiments show that unsupervised QG projection improves on parses trained using only highprecision projected annotations and far outperforms, by more than 35% absolute dependency accuracy, learning an unsupervised parser from raw target-language text alone. When a few target-language parse trees are available, projection gives a boost equivalent to doubling the number of target-language trees. ∗The first author would like to thank the Center for Intelligent Information Retrieval at UMass Amherst. We would also like to thank Noah Smith and Rebecca Hwa for helpful discussions and the anonymous reviewers for their suggestions for improving the paper.",
"title": ""
},
{
"docid": "479e962b8ed5d1b8f03280b209c27249",
"text": "A feedforward network is proposed which lends itself to cost-effective implementations in digital hardware and has a fast forward-pass capability. It differs from the conventional model in restricting its synapses to the set {−1, 0, 1} while allowing unrestricted offsets. Simulation results on the ‘onset of diabetes’ data set and a handwritten numeral recognition database indicate that the new network, despite having strong constraints on its synapses, has a generalization performance similar to that of its conventional counterpart. I. Hardware Implementation Ease of hardware implementation is the key feature that distinguishes the feedforward network from competing statistical and machine learning techniques. The most distinctive characteristic of the graph of that network is its homogeneous modularity. Because of its modular architecture, the natural implementation of this network is a parallel one, whether in software or in hardware. The digital, electronic implementation holds considerable interest – the modular architecture of the feedforward network is well matched with VLSI design tools and therefore lends itself to cost-effective mass production. There is, however, a hitch which makes this union between the feedforward network and digital hardware far from ideal: the network parameters (weights) and its internal functions (dot product, activation functions) are inherently analog. It is too much to expect a network trained in an analog (or high-resolution digital) environment to behave satisfactorily when transplanted into typically low-resolution hardware. Use of the digital approximation of a continuous activation function, and/or range-limiting of weights should, in general, lead to an unsatisfactory approximation. The solution to this problem may lie in a bottom-up approach – instead of trying to fit a trained, but inherently analog network in digital hardware, train the network in such a way that it is suitable for direct digital implementation after training. This approach is the basis of the network proposed here. This network, with synapses from {−1, 0, 1} and continuous offsets, can be formed without using a conventional multiplier. This reduction in complexity, plus the fact that all synapses require no more than a single bit each for storage, makes these networks very attractive. It is possible that the severity of the {−1, 0, 1} restric1Offsets are also known as thresholds as well as biases. 2A zero-valued synapse indicates the absence of a synapse! tion may weaken the approximation capability of this network, however our experiments on classification tasks indicate otherwise. Comfort is also provided by a result on approximation in C(R) [4]. That result, the Multiplier-Free Network (MFN) existence theorem, guarantees that networks with input-layer synapses from the set {−1, 1}, no output-layer synapses, unrestricted offsets, and a single hidden layer of neurons requiring only sign adjustment, addition, and hyperbolic tangent activation functions, can approximate all functions of one variable with any desired accuracy. The constraints placed upon the network weights may result in an increase in the necessary number of hidden neurons required to achieve a given degree of accuracy on most learning tasks. It should also be noted that the hardware implementation benefits are valid only when the MFN has been trained, as the learning task still requires high-resolution arithmetic. This makes the MFN unsuitable for in-situ learning. 
Moreover, high-resolution offsets and activation functions are required during training and for the trained network.",
"title": ""
},
{
"docid": "49b550eb7f99baef1f9accd9da9a26f4",
"text": "Answer selection is an essential step in a question answering (QA) system. Traditional methods for this task mainly focus on developing linguistic features that are limited in practice. With the great success of deep learning method in distributed text representation, deep learning-based answer selection approaches have been well investigated, which mainly employ only one neural network, i.e., convolutional neural network (CNN) or long short term memory (LSTM), leading to failures in extracting some rich sentence features. Thus, in this paper, we propose a collaborative learning-based answer selection model (QA-CL), where we deploy a parallel training architecture to collaboratively learn the initial word vector matrix of the sentence by CNN and bidirectional LSTM (BiLSTM) at the same time. In addition, we extend our model by incorporating the sentence embedding generated by the QA-CL model into a joint distributed sentence representation using a strong unsupervised baseline weight removal (WR), i.e., the QA-CLWR model. We evaluate our proposals on a popular QA dataset, InsuranceQA. The experimental results indicate that our proposed answer selection methods can produce a better performance compared with several strong baselines. Finally, we investigate the models’ performance with respect to different question types and find that question types with a medium number of questions have a better and more stable performance than those types with too large or too small number of questions.",
"title": ""
},
{
"docid": "b447aec2deaa67788560efe1d136be31",
"text": "This paper addresses the design, construction and control issues of a novel biomimetic robotic dolphin equipped with mechanical flippers, based on an engineered propulsive model. The robotic dolphin is modeled as a three-segment organism composed of a rigid anterior body, a flexible rear body and an oscillating fluke. The dorsoventral movement of the tail produces the thrust and bending of the anterior body in the horizontal plane enables turning maneuvers. A dualmicrocontroller structure is adopted to drive the oscillating multi-link rear body and the mechanical flippers. Experimental results primarily confirm the effectiveness of the dolphin-like movement in propulsion and maneuvering.",
"title": ""
},
{
"docid": "a4037343fa0df586946d8034b0bf8a5b",
"text": "Security researchers are applying software reliability models to vulnerability data, in an attempt to model the vulnerability discovery process. I show that most current work on these vulnerability discovery models (VDMs) is theoretically unsound. I propose a standard set of definitions relevant to measuring characteristics of vulnerabilities and their discovery process. I then describe the theoretical requirements of VDMs and highlight the shortcomings of existing work, particularly the assumption that vulnerability discovery is an independent process.",
"title": ""
},
{
"docid": "8e794530be184686a49e5ced6ac6521d",
"text": "A key feature of the immune system is its ability to induce protective immunity against pathogens while maintaining tolerance towards self and innocuous environmental antigens. Recent evidence suggests that by guiding cells to and within lymphoid organs, CC-chemokine receptor 7 (CCR7) essentially contributes to both immunity and tolerance. This receptor is involved in organizing thymic architecture and function, lymph-node homing of naive and regulatory T cells via high endothelial venules, as well as steady state and inflammation-induced lymph-node-bound migration of dendritic cells via afferent lymphatics. Here, we focus on the cellular and molecular mechanisms that enable CCR7 and its two ligands, CCL19 and CCL21, to balance immunity and tolerance.",
"title": ""
},
{
"docid": "83dec7aa3435effc3040dfb08cb5754a",
"text": "This paper examines the relationship between annual report readability and firm performance and earnings persistence. This is motivated by the Securities and Exchange Commission’s plain English disclosure regulations that attempt to make corporate disclosures easier to read for ordinary investors. I measure the readability of public company annual reports using both the Fog Index from computational linguistics and the length of the document. I find that the annual reports of firms with lower earnings are harder to read (i.e., they have higher Fog and are longer). Moreover, the positive earnings of firms with annual reports that are easier to read are more persistent. This suggests that managers may be opportunistically choosing the readability of annual reports to hide adverse information from investors.",
"title": ""
},
{
"docid": "9a5f5df096ad76798791e7bebd6f8c93",
"text": "Organisational Communication, in today’s organizations has not only become far more complex and varied but has become an important factor for overall organizational functioning and success. The way the organization communicates with its employees is reflected in morale, motivation and performance of the employees. The objective of the present paper is to explore the interrelationship between communication and motivation and its overall impact on employee performance. The paper focuses on the fact that communication in the workplace can take many forms and has a lasting effect on employee motivation. If employees feel that communication from management is effective, it can lead to feelings of job satisfaction, commitment to the organisation and increased trust in the workplace. This study was conducted through a comprehensive review and critical analysis of the research and literature focused upon the objectives of the paper. It also enumerates the results of a study of organizational communication and motivational practices followed at a large manufacturing company, Vanaz Engineers Ltd., based at Pune, to support the hypothesis propounded in the paper.",
"title": ""
},
{
"docid": "29152062efc341bf3ce55d41cf13bdcf",
"text": "In this report we discuss the findings of a Web-based questionnaire aimed at discovering both patterns of use of videoconferencing systems within HP and the reasons people give for either not using, or for using such systems. The primary motivation was to understand these issues for the purpose of designing new kinds of technology to support remote work rather than as an investigation into HP’s internal processes. The questionnaire, filled out via the Web by 4532 people across HP, showed that most participants (68%) had not taken part in a videoconference within the last 3 years, and only 3% of the sample were frequent users. Of those who had used videoconference systems, the main benefits were perceived to be the ability to: see people they had never met before, see facial expressions and gestures, and follow conversations with multiple participants more easily. The main problems that users of videoconference technology perceived were: the high overhead of setting up and planning videoconferencing meetings, a lack of a widespread base of users, the perception that videoconference technology did not add value over existing communication tools, and quality and reliability issues. Non-users indicated that the main barriers were lack of access to videoconference facilities and tools and a perception that they did not need to use this tool because other tools were satisfactory. The findings from this study in a real work setting are related to findings in the research literature, and implications for system design and research are identified.",
"title": ""
},
{
"docid": "61eb4d0961242bd1d1e59d889a84f89d",
"text": "Understanding and forecasting the health of an online community is of great value to its owners and managers who have vested interests in its longevity and success. Nevertheless, the association between community evolution and the behavioural patterns and trends of its members is not clearly understood, which hinders our ability of making accurate predictions of whether a community is flourishing or diminishing. In this paper we use statistical analysis, combined with a semantic model and rules for representing and computing behaviour in online communities. We apply this model on a number of forum communities from Boards.ie to categorise behaviour of community members over time, and report on how different behaviour compositions correlate with positive and negative community growth in these forums.",
"title": ""
},
{
"docid": "d7fd9c273c0b26a309b84e0d99143557",
"text": "Remote sensing is one of the most common ways to extract relevant information about Earth and our environment. Remote sensing acquisitions can be done by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. According to the sensor, a variety of information about the Earth's surface can be obtained. The data acquired by these sensors can provide information about the structure (optical, synthetic aperture radar), elevation (LiDAR), and material content (multispectral and hyperspectral) of the objects in the image. Once considered together their complementarity can be helpful for characterizing land use (urban analysis, precision agriculture), damage detection (e.g., in natural disasters such as floods, hurricanes, earthquakes, oil spills in seas), and give insights to potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allows one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropological effects (urban sprawl, deforestation), climate changes (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the data fusion contests, organized by the IEEE Geoscience and Remote Sensing Society since 2006. We will report on the outcomes of these contests, presenting the multimodal sets of data made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What were the improvements/new opportunities offered by the fusion? What were the objectives to be addressed and the reported solutions? And from this, what will be the next challenges?",
"title": ""
},
{
"docid": "49e2963e84967100deee8fc810e053ba",
"text": "We have developed a method for rigidly aligning images of tubes. This paper presents an evaluation of the consistency of that method for three-dimensional images of human vasculature. Vascular images may contain alignment ambiguities, poorly corresponding vascular networks, and non-rigid deformations, yet the Monte Carlo experiments presented in this paper show that our method registers vascular images with sub-voxel consistency in a matter of seconds. Furthermore, we show that the method's insensitivity to non-rigid deformations enables the localization, quantification, and visualization of those deformations. Our method aligns a source image with a target image by registering a model of the tubes in the source image directly with the target image. Time can be spent to extract an accurate model of the tubes in the source image. Multiple target images can then be registered with that model without additional extractions. Our registration method builds upon the principles of our tubular object segmentation work that combines dynamic-scale central ridge traversal with radius estimation. In particular, our registration method's consistency stems from incorporating multi-scale ridge and radius measures into the model-image match metric. Additionally, the method's speed is due in part to the use of coarse-to-fine optimization strategies that are enabled by measures made during model extraction and by the parameters inherent to the model-image match metric.",
"title": ""
},
{
"docid": "a3421349059058a0c62105951e46435e",
"text": "It has been proposed that two amino acid substitutions in the transcription factor FOXP2 have been positively selected during human evolution due to effects on aspects of speech and language. Here, we introduce these substitutions into the endogenous Foxp2 gene of mice. Although these mice are generally healthy, they have qualitatively different ultrasonic vocalizations, decreased exploratory behavior and decreased dopamine concentrations in the brain suggesting that the humanized Foxp2 allele affects basal ganglia. In the striatum, a part of the basal ganglia affected in humans with a speech deficit due to a nonfunctional FOXP2 allele, we find that medium spiny neurons have increased dendrite lengths and increased synaptic plasticity. Since mice carrying one nonfunctional Foxp2 allele show opposite effects, this suggests that alterations in cortico-basal ganglia circuits might have been important for the evolution of speech and language in humans.",
"title": ""
},
{
"docid": "ee3d837390e1f53181cfb393a0af3cc6",
"text": "The telecommunications industry is highly competitive, which means that the mobile providers need a business intelligence model that can be used to achieve an optimal level of churners, as well as a minimal level of cost in marketing activities. Machine learning applications can be used to provide guidance on marketing strategies. Furthermore, data mining techniques can be used in the process of customer segmentation. The purpose of this paper is to provide a detailed analysis of the C.5 algorithm, within naive Bayesian modelling for the task of segmenting telecommunication customers behavioural profiling according to their billing and socio-demographic aspects. Results have been experimentally implemented.",
"title": ""
},
{
"docid": "9ecd46e90ccd1db7daef14dd63fea8ee",
"text": "HISTORY AND EXAMINATION — A 13-year-old Caucasian boy (BMI 26.4 kg/m) presented with 3 weeks’ history of polyuria, polydipsia, and weight loss. His serum glucose (26.8 mmol/l), HbA1c (9.4%, normal 3.2–5.5) and fructosamine (628 mol/l, normal 205–285) levels were highly elevated (Fig. 1), and urinalysis showed glucosuria ( ) and ketonuria ( ) . He was HLA-DRB1* 0101,*0901, DRB4*01, DQA1*0101,03, and DQB1*0303,0501. Plasma Cpeptide, determined at a blood glucose of 17.0 mmol/l, was low (0.18 nmol/l). His previous history was unremarkable, and he did not take any medication. The patient received standard treatment with insulin, fluid, and electrolyte replacement and diabetes education. After an uneventful clinical course he was discharged on multiple-injection insulin therapy (total 0.9 units kg 1 day ) after 10 days. Subsequently, insulin doses were gradually reduced to 0.3 units kg 1 day , and insulin treatment was completely stopped after 11 months. Without further treatment, HbA1c and fasting glucose levels remained normal throughout the entire follow-up of currently 4.5 years. During oral glucose tolerance testing performed 48 months after diagnosis, he had normal fasting and 2-h levels of glucose (3.7 and 5.6 mmol/l, respectively), insulin (60.5 and 217.9 pmol/l, respectively), and C-peptide (0.36 and 0.99 nmol/l, respectively). His insulin sensitivity, as determined by insulin sensitivity index (composite) and homeostasis model assessment, was normal, and BMI remained unchanged. Serum autoantibodies to GAD65, insulin autoantibody-2, insulin, and islet cell antibodies were initially positive but showed a progressive decline or loss during follow-up. INVESTIGATION — T-cell antigen recognition and cytokine profiles were studied using a library of 21 preproinsulin (PPI) peptides (2). In the patient’s peripheral blood mononuclear cells (PBMCs), a high cumulative interleukin (IL)-10) secretion (201 pg/ml) was observed in response to PPI peptides, with predominant recognition of PPI44–60 and PPI49–65, while interferon (IFN)secretion was undetectable. In contrast, in PBMCs from a cohort of 12 type 1 diabetic patients without long-term remission (2), there was a dominant IFNresponse but low IL-10 secretion to PPI. Analysis of CD4 T–helper cell subsets revealed that IL-10 secretion was mostly attributable to the patient’s naı̈ve/recently activated CD45RA cells, while a strong IFNresponse was observed in CD45RA cells. CD45RA T-cells have been associated with regulatory T-cell function in diabetes, potentially capable of suppressing",
"title": ""
},
{
"docid": "15ddb8cb5e82e0efde197908420bb8d0",
"text": "In recent years, there has been much interest in learning Bayesian networks from data. Learning such models is desirable simply because there is a wide array of off-the-shelf tools that can apply the learned models as expert systems, diagnosis engines, and decision support systems. Practitioners also claim that adaptive Bayesian networks have advantages in their own right as a non-parametric method for density estimation, data analysis, pattern classification, and modeling. Among the reasons cited we find: their semantic clarity and understandability by humans, the ease of acquisition and incorporation of prior knowledge, the ease of integration with optimal decision-making methods, the possibility of causal interpretation of learned models, and the automatic handling of noisy and missing data. In spite of these claims, and the initial success reported recently, methods that learn Bayesian networks have yet to make the impact that other techniques such as neural networks and hidden Markov models have made in applications such as pattern and speech recognition. In this paper, we challenge the research community to identify and characterize domains where induction of Bayesian networks makes the critical difference, and to quantify the factors that are responsible for that difference. In addition to formalizing the challenge, we identify research problems whose solution is, in our view, crucial for meeting this challenge.",
"title": ""
}
] |
scidocsrr
|
e449b1286c75aa09e23e96de39f4f155
|
New image descriptors based on color, texture, shape, and wavelets for object and scene image classification
|
[
{
"docid": "9b3adf0f3c15a42ac1ee82a38d451988",
"text": "Four novel color Local Binary Pattern (LBP) descriptors are presented in this paper for scene image and image texture classification with applications to image search and retrieval. The oRGB-LBP descriptor is derived by concatenating the LBP features of the component images in the oRGB color space. The Color LBP Fusion (CLF) descriptor is constructed by integrating the LBP descriptors from different color spaces; the Color Grayscale LBP Fusion (CGLF) descriptor is derived by integrating the grayscale-LBP descriptor and the CLF descriptor; and the CGLF+PHOG descriptor is obtained by integrating the Pyramid of Histogram of Orientation Gradients (PHOG) and the CGLF descriptor. Feature extraction applies the Enhanced Fisher Model (EFM) and image classification is based on the nearest neighbor classification rule (EFM-NN). The proposed image descriptors and the feature extraction and classification methods are evaluated using three grand challenge databases and are shown to improve upon the classification performance of existing methods.",
"title": ""
},
{
"docid": "305efd1823009fe79c9f8ff52ddb5724",
"text": "We explore the problem of classifying images by the object categories they contain in the case of a large number of object categories. To this end we combine three ingredients: (i) shape and appearance representations that support spatial pyramid matching over a region of interest. This generalizes the representation of Lazebnik et al., (2006) from an image to a region of interest (ROI), and from appearance (visual words) alone to appearance and local shape (edge distributions); (ii) automatic selection of the regions of interest in training. This provides a method of inhibiting background clutter and adding invariance to the object instance 's position; and (iii) the use of random forests (and random ferns) as a multi-way classifier. The advantage of such classifiers (over multi-way SVM for example) is the ease of training and testing. Results are reported for classification of the Caltech-101 and Caltech-256 data sets. We compare the performance of the random forest/ferns classifier with a benchmark multi-way SVM classifier. It is shown that selecting the ROI adds about 5% to the performance and, together with the other improvements, the result is about a 10% improvement over the state of the art for Caltech-256.",
"title": ""
}
] |
[
{
"docid": "b80291b00c462e094389bdcede4b7ad8",
"text": "The availability of large datsets has enabled neural networks to achieve impressive recognition results. However, the presence of inaccurate class labels is known to deteriorate the performance of even the best classifiers in a broad range of classification problems. Noisy labels also tend to be more harmful than noisy attributes. When the observed label is noisy, we can view the correct label as a latent random variable and model the noise processes by a communication channel with unknown parameters. Thus we can apply the EM algorithm to find the parameters of both the network and the noise and estimate the correct label. In this study we present a neural-network approach that optimizes the same likelihood function as optimized by the EM algorithm. The noise is explicitly modeled by an additional softmax layer that connects the correct labels to the noisy ones. This scheme is then extended to the case where the noisy labels are dependent on the features in addition to the correct labels. Experimental results demonstrate that this approach outperforms previous methods.",
"title": ""
},
{
"docid": "9a914020b22011255f8c69e29a718667",
"text": "Software engineering practitioners often spend significant amount of time and effort to debug. To help practitioners perform this crucial task, hundreds of papers have proposed various fault localization techniques. Fault localization helps practitioners to find the location of a defect given its symptoms (e.g., program failures). These localization techniques have pinpointed the locations of bugs of various systems of diverse sizes, with varying degrees of success, and for various usage scenarios. Unfortunately, it is unclear whether practitioners appreciate this line of research. To fill this gap, we performed an empirical study by surveying 386 practitioners from more than 30 countries across 5 continents about their expectations of research in fault localization. In particular, we investigated a number of factors that impact practitioners' willingness to adopt a fault localization technique. We then compared what practitioners need and the current state-of-research by performing a literature review of papers on fault localization techniques published in ICSE, FSE, ESEC-FSE, ISSTA, TSE, and TOSEM in the last 5 years (2011-2015). From this comparison, we highlight the directions where researchers need to put effort to develop fault localization techniques that matter to practitioners.",
"title": ""
},
{
"docid": "06b6f659fe422410d65081735ad2d16a",
"text": "BACKGROUND\nImproving survival and extending the longevity of life for all populations requires timely, robust evidence on local mortality levels and trends. The Global Burden of Disease 2015 Study (GBD 2015) provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015. These results informed an in-depth investigation of observed and expected mortality patterns based on sociodemographic measures.\n\n\nMETHODS\nWe estimated all-cause mortality by age, sex, geography, and year using an improved analytical approach originally developed for GBD 2013 and GBD 2010. Improvements included refinements to the estimation of child and adult mortality and corresponding uncertainty, parameter selection for under-5 mortality synthesis by spatiotemporal Gaussian process regression, and sibling history data processing. We also expanded the database of vital registration, survey, and census data to 14 294 geography-year datapoints. For GBD 2015, eight causes, including Ebola virus disease, were added to the previous GBD cause list for mortality. We used six modelling approaches to assess cause-specific mortality, with the Cause of Death Ensemble Model (CODEm) generating estimates for most causes. We used a series of novel analyses to systematically quantify the drivers of trends in mortality across geographies. First, we assessed observed and expected levels and trends of cause-specific mortality as they relate to the Socio-demographic Index (SDI), a summary indicator derived from measures of income per capita, educational attainment, and fertility. Second, we examined factors affecting total mortality patterns through a series of counterfactual scenarios, testing the magnitude by which population growth, population age structures, and epidemiological changes contributed to shifts in mortality. Finally, we attributed changes in life expectancy to changes in cause of death. We documented each step of the GBD 2015 estimation processes, as well as data sources, in accordance with Guidelines for Accurate and Transparent Health Estimates Reporting (GATHER).\n\n\nFINDINGS\nGlobally, life expectancy from birth increased from 61·7 years (95% uncertainty interval 61·4-61·9) in 1980 to 71·8 years (71·5-72·2) in 2015. Several countries in sub-Saharan Africa had very large gains in life expectancy from 2005 to 2015, rebounding from an era of exceedingly high loss of life due to HIV/AIDS. At the same time, many geographies saw life expectancy stagnate or decline, particularly for men and in countries with rising mortality from war or interpersonal violence. From 2005 to 2015, male life expectancy in Syria dropped by 11·3 years (3·7-17·4), to 62·6 years (56·5-70·2). Total deaths increased by 4·1% (2·6-5·6) from 2005 to 2015, rising to 55·8 million (54·9 million to 56·6 million) in 2015, but age-standardised death rates fell by 17·0% (15·8-18·1) during this time, underscoring changes in population growth and shifts in global age structures. The result was similar for non-communicable diseases (NCDs), with total deaths from these causes increasing by 14·1% (12·6-16·0) to 39·8 million (39·2 million to 40·5 million) in 2015, whereas age-standardised rates decreased by 13·1% (11·9-14·3). Globally, this mortality pattern emerged for several NCDs, including several types of cancer, ischaemic heart disease, cirrhosis, and Alzheimer's disease and other dementias. 
By contrast, both total deaths and age-standardised death rates due to communicable, maternal, neonatal, and nutritional conditions significantly declined from 2005 to 2015, gains largely attributable to decreases in mortality rates due to HIV/AIDS (42·1%, 39·1-44·6), malaria (43·1%, 34·7-51·8), neonatal preterm birth complications (29·8%, 24·8-34·9), and maternal disorders (29·1%, 19·3-37·1). Progress was slower for several causes, such as lower respiratory infections and nutritional deficiencies, whereas deaths increased for others, including dengue and drug use disorders. Age-standardised death rates due to injuries significantly declined from 2005 to 2015, yet interpersonal violence and war claimed increasingly more lives in some regions, particularly in the Middle East. In 2015, rotaviral enteritis (rotavirus) was the leading cause of under-5 deaths due to diarrhoea (146 000 deaths, 118 000-183 000) and pneumococcal pneumonia was the leading cause of under-5 deaths due to lower respiratory infections (393 000 deaths, 228 000-532 000), although pathogen-specific mortality varied by region. Globally, the effects of population growth, ageing, and changes in age-standardised death rates substantially differed by cause. Our analyses on the expected associations between cause-specific mortality and SDI show the regular shifts in cause of death composition and population age structure with rising SDI. Country patterns of premature mortality (measured as years of life lost [YLLs]) and how they differ from the level expected on the basis of SDI alone revealed distinct but highly heterogeneous patterns by region and country or territory. Ischaemic heart disease, stroke, and diabetes were among the leading causes of YLLs in most regions, but in many cases, intraregional results sharply diverged for ratios of observed and expected YLLs based on SDI. Communicable, maternal, neonatal, and nutritional diseases caused the most YLLs throughout sub-Saharan Africa, with observed YLLs far exceeding expected YLLs for countries in which malaria or HIV/AIDS remained the leading causes of early death.\n\n\nINTERPRETATION\nAt the global scale, age-specific mortality has steadily improved over the past 35 years; this pattern of general progress continued in the past decade. Progress has been faster in most countries than expected on the basis of development measured by the SDI. Against this background of progress, some countries have seen falls in life expectancy, and age-standardised death rates for some causes are increasing. Despite progress in reducing age-standardised death rates, population growth and ageing mean that the number of deaths from most non-communicable causes are increasing in most countries, putting increased demands on health systems.\n\n\nFUNDING\nBill & Melinda Gates Foundation.",
"title": ""
},
{
"docid": "909e55c3359543bf7ed3e5659d7cc27f",
"text": "We study the link between family violence and the emotional cues associated with wins and losses by professional football teams. We hypothesize that the risk of violence is affected by the “gain-loss” utility of game outcomes around a rationally expected reference point. Our empirical analysis uses police reports of violent incidents on Sundays during the professional football season. Controlling for the pregame point spread and the size of the local viewing audience, we find that upset losses (defeats when the home team was predicted to win by four or more points) lead to a 10% increase in the rate of at-home violence by men against their wives and girlfriends. In contrast, losses when the game was expected to be close have small and insignificant effects. Upset wins (victories when the home team was predicted to lose) also have little impact on violence, consistent with asymmetry in the gain-loss utility function. The rise in violence after an upset loss is concentrated in a narrow time window near the end of the game and is larger for more important games. We find no evidence for reference point updating based on the halftime score.",
"title": ""
},
{
"docid": "f154b293b364498f228c71af14813ad2",
"text": "advantage of array antenna structures to better process the incoming signals. They also have the ability to identify multiple targets. This paper explores the eigen-analysis category of super resolution algorithm. A class of Multiple Signal Classification (MUSIC) algorithms known as a root-MUSIC algorithm is presented in this paper. The root-MUSIC method is based on the eigenvectors of the sensor array correlation matrix. It obtains the signal estimation by examining the roots of the spectrum polynomial. The peaks in the spectrum space correspond to the roots of the polynomial lying close to the unit circle. Statistical analysis of the performance of the processing algorithm and processing resource requirements are discussed in this paper. Extensive computer simulations are used to show the performance of the algorithms.",
"title": ""
},
{
"docid": "ba2e16103676fa57bc3ca841106d2d32",
"text": "The purpose of this study was to investigate the effect of the ultrasonic cavitation versus low level laser therapy in the treatment of abdominal adiposity in female post gastric bypass. Subjects: Sixty female suffering from localized fat deposits at the abdomen area after gastric bypass were divided randomly and equally into three equal groups Group (1): were received low level laser therapy plus bicycle exercises and abdominal exercises for 3 months, Group (2): were received ultrasonic cavitation therapy plus bicycle exercises and abdominal exercises for 3 months, and Group (3): were received bicycle exercises and abdominal exercises for 3 months. Methods: data were obtained for each patient from waist circumferences, skin fold and ultrasonography measurements were done after six weeks postoperative (preexercise) and at three months postoperative. The physical therapy program began, six weeks postoperative for experimental group. Including aerobic exercises performed on the stationary bicycle, for 30 min, 3 sessions per week for three months Results: showed a statistically significant decrease in waist circumferences, skin fold and ultrasonography measurements in the three groups, with a higher rate of reduction in Group (1) and Group (2) .Also there was a non-significant difference between Group (1) and Group (2). Conclusion: these results suggested that bothlow level laser therapy and ultrasonic cavitation had a significant effect on abdominal adiposity after gastric bypass in female.",
"title": ""
},
{
"docid": "50dc3186ad603ef09be8cca350ff4d77",
"text": "Design iteration time in SoC design flow is reduced through performance exploration at a higher level of abstraction. This paper proposes an accurate and fast performance analysis method in early stage of design process using a behavioral model written in C/C++ language. We made a cycle-accurate but fast and flexible compiled instruction set simulator (ISS) and IP models that represent hardware functionality and performance. System performance analyzer configured by the target communication architecture analyzes the performance utilizing event-traces obtained by running the ISS and IP models. This solution is automated and implemented in the tool, HIPA. We obtain diverse performance profiling results and achieve 95% accuracy using an abstracted C model. We also achieve about 20 times speed-up over corresponding co-simulation tools.",
"title": ""
},
{
"docid": "596bb1265a375c68f0498df90f57338e",
"text": "The concept of unintended pregnancy has been essential to demographers in seeking to understand fertility, to public health practitioners in preventing unwanted childbear-ing and to both groups in promoting a woman's ability to determine whether and when to have children. Accurate measurement of pregnancy intentions is important in understanding fertility-related behaviors, forecasting fertility, estimating unmet need for contraception, understanding the impact of pregnancy intentions on maternal and child health, designing family planning programs and evaluating their effectiveness, and creating and evaluating community-based programs that prevent unintended pregnancy. 1 Pregnancy unintendedness is a complex concept, and has been the subject of recent conceptual and method-ological critiques. 2 Pregnancy intentions are increasingly viewed as encompassing affective, cognitive, cultural and contextual dimensions. Developing a more complete understanding of pregnancy intentions should advance efforts to increase contraceptive use, to prevent unintended pregnancies and to improve the health of women and their children. To provide a scientific foundation for public health efforts to prevent unintended pregnancy, we conducted a review of unintended pregnancy between the fall of 1999 and the spring of 2001 as part of strategic planning activities within the Division of Reproductive Health at the Centers for Disease Control and Prevention (CDC). We reviewed the published and unpublished literature, consulted with experts in reproductive health and held several joint meetings with the Demographic and Behavioral Research Branch of the National Institute of Child Health and Human Development , and the Office of Population Affairs of the Department of Health and Human Services. We used standard scientific search engines, such as Medline, to find relevant articles published since 1975, and identified older references from bibliographies contained in recent articles; academic experts and federal officials helped to identify unpublished reports. This comment summarizes our findings and incorporates insights gained from the joint meetings and the strategic planning process. CURRENT DEFINITIONS AND MEASURES Conventional measures of unintended pregnancy are designed to reflect a woman's intentions before she became pregnant. 3 Unintended pregnancies are pregnancies that are reported to have been either unwanted (i.e., they occurred when no children, or no more children, were desired) or mistimed (i.e., they occurred earlier than desired). In contrast, pregnancies are described as intended if they are reported to have happened at the \" right time \" 4 or later than desired (because of infertility or difficulties in conceiving). A concept related to unintended pregnancy is unplanned pregnancy—one that occurred when …",
"title": ""
},
{
"docid": "3edec34d6438a7eddee9f0e5a6e7cd6c",
"text": "Multi-atlas segmentation approach is one of the most widely-used image segmentation techniques in biomedical applications. There are two major challenges in this category of methods, i.e., atlas selection and label fusion. In this paper, we propose a novel multi-atlas segmentation method that formulates multi-atlas segmentation in a deep learning framework for better solving these challenges. The proposed method, dubbed deep fusion net (DFN), is a deep architecture that integrates a feature extraction subnet and a non-local patch-based label fusion (NL-PLF) subnet in a single network. The network parameters are learned by end-to-end training for automatically learning deep features that enable optimal performance in a NL-PLF framework. The learned deep features are further utilized in defining a similarity measure for atlas selection. By evaluating on two public cardiac MR datasets of SATA-13 and LV-09 for left ventricle segmentation, our approach achieved 0.833 in averaged Dice metric (ADM) on SATA-13 dataset and 0.95 in ADM for epicardium segmentation on LV-09 dataset, comparing favorably with the other automatic left ventricle segmentation methods. We also tested our approach on Cardiac Atlas Project (CAP) testing set of MICCAI 2013 SATA Segmentation Challenge, and our method achieved 0.815 in ADM, ranking highest at the time of writing.",
"title": ""
},
{
"docid": "01534202e7db5d9059651290e1720bf0",
"text": "The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across variou s CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin.",
"title": ""
},
{
"docid": "7cfc5f4f7a2d76f4d80e3dc6a15313ee",
"text": "Using projection mapping enables us to bring virtual worlds into shared physical spaces. In this paper, we present a novel, adaptable and real-time projection mapping system, which supports multiple projectors and high quality rendering of dynamic content on surfaces of complex geometrical shape. Our system allows for smooth blending across multiple projectors using a new optimization framework that simulates the diffuse direct light transport of the physical world to continuously adapt the color output of each projector pixel. We present a real-time solution to this optimization problem using off-the-shelf graphics hardware, depth cameras and projectors. Our approach enables us to move projectors, depth camera or objects while maintaining the correct illumination, in realtime, without the need for markers on the object. It also allows for projectors to be removed or dynamically added, and provides compelling results with only commodity hardware.",
"title": ""
},
{
"docid": "84a7ef0d27649619119892c6c91cf63c",
"text": "As the most-studied form of leadership across disciplines in both Western and Chinese contexts, transformational school leadership has the potential to suit diverse national and cultural contexts. Given the growing evidence showing the positive effects of transformational leadership on various school outcomes as it relates to school environment, teacher and student achievement, we wanted to explore the factors that gave rise to transformational leadership. The purpose of this study was to identify and compare the antecedents fostering transformational leadership in the contexts of both the United States and China. This paper reviews and discusses the empirical studies of the last two decades, concentrating on the variables that are antecedent to transformational leadership mainly in the educational context, but also in public management, business and psychology. Results show that transformational leadership is related to three sets of antecedents, which include: (1) the leader’s qualities (e.g., self-efficacy, values, traits, emotional intelligence); (2) organizational features (e.g., organization fairness); and (3) the leader’s colleagues’ characteristics (e.g., follower’s initial developmental level). Some antecedents were common to both contexts, while other antecedents appeared to be national context specific. The implications of the findings for future research and leader preparation in different national contexts are discussed.",
"title": ""
},
{
"docid": "b77af68695ad7b5f0f2e4519013aae04",
"text": "Because topical compounds based on insecticidal chemicals are the mainstay of head lice treatment, but resistance is increasing, alternatives, such as herbs and oils are being sold to treat head lice. To test a commercial shampoo based on seed extract of Azadirachta indica (neem tree) for its in vitro effect, head lice (n=17) were collected from school children in Australia and immersed in Wash-Away Louse™ shampoo (Alpha-Biocare GmbH, Germany). Vitality was evaluated for more than 3 h by examination under a dissecting microscope. Positive and negative controls were a commercially available head lice treatment containing permethrin 1% (n=19) and no treatment (n=14). All lice treated with the neem shampoo did not show any vital signs from the initial examination after immersion at 5–30 min; after 3 h, only a single louse showed minor signs of life, indicated by gut movements, a mortality of 94%. In the permethrin group, mortality was 20% at 5 min, 50% at 15 min, and 74% after 3 h. All 14 head lice of the negative control group survived during the observation period. Our data show that Wash-Away Louse™ is highly effective in vitro against head lice. The neem shampoo was more effective than the permethrin-based product. We speculate that complex plant-based compounds will replace the well-defined chemical pediculicides if resistance to the commonly used products further increases.",
"title": ""
},
{
"docid": "1ddfd2f44ed394318454b071124d423d",
"text": "Urban growth along the middle section of the ancient silk-road of China (so called West Yellow River Corridor—He-Xi Corridor) has taken a unique path deviating from what is commonly seen in the coastal China. Urban growth here has been driven by historical heritage, transportation connection between East and West China, and mineral exploitation. However, it has been constrained by water shortage and harsh natural environment because this region is located in arid and semi-arid climate zones. This paper attempts to construct a multi-city agent-based model to explore possible trajectories of regional urban growth along the entire He-Xi Corridor under a severe environment risk, over urban growth under an extreme threat of water shortage. In contrast with current ABM approaches, our model will simulate urban growth in a large administrative region consisting of a system of cities. It simultaneously considers the spatial variations of these cities in terms of population size, development history, water resource endowment and sustainable development potential. It also explores potential impacts of exogenous inter-city interactions on future urban growth on the basis of urban gravity model. The algorithmic foundations of three types of agents, developers, conservationists and regional-planners, are discussed. Simulations with regard to three different development scenarios are presented and analyzed.",
"title": ""
},
{
"docid": "a4bf72d9f3bb455cf701f40b5ed8d9ba",
"text": "Patients are often overwhelmed in their efforts to understand their illnesses and determine what actions to take. In this paper, we want to show why care is sometimes not co-managed well between clinicians and patients, and the necessary information is often not well coordinated. Through a 2.5-year field study of an adult bone marrow transplant (BMT) clinic, we show there are different experiences of temporal ordering, or temporalities, between clinicians and patients (and their caregivers). We also show that misalignments between these temporalities can seriously affect the articulation (coordination) and information work that must go on for people to co-manage their conditions with clinicians. As one example, information flows can be misaligned, as a result of differing temporalities, causing sometimes an overwhelming amount of information to be presented and sometimes a lack of properly contextualized information. We also argue that these misalignments in temporalities, important in medicine, are a general coordination problem.",
"title": ""
},
{
"docid": "ed9a02a856782a89476bcf233f4c9488",
"text": "This paper examines the role of IT in developing collaborative consumption. We present a study of the multi-sided platform goCatch, which is widely recognized as a mobile application and digital disruptor in the Australian transport industry. From our investigation, we find that goCatch uses IT to create situational-based and object-based opportunities to enable collaborative consumption and in turn digital disruption to the incumbent industry. We also highlight the factors to consider in developing a mobile application to connect with customers, and serve as a viable competitive option for responding to competition. Such research is necessary in order to better understand how service providers extract business value from digital technologies to formulate new breakthrough strategies, design compelling new products and services, and transform management processes. Ongoing work will reveal how m-commerce service providers can extract business value from a collaborative consumption model.",
"title": ""
},
{
"docid": "e78f8f96af1c589487273c1fecfa0f7c",
"text": "BACKGROUND\nAtrial fibrillation (AFib) is the most common form of heart arrhythmia and a potent risk factor for stroke. Nonvitamin K antagonist oral anticoagulants (NOACs) are routinely prescribed to manage AFib stroke risk; however, nonadherence to treatment is a concern. Additional tools that support self-care and medication adherence may benefit patients with AFib.\n\n\nOBJECTIVE\nThe aim of this study was to evaluate the perceived usability and usefulness of a mobile app designed to support self-care and treatment adherence for AFib patients who are prescribed NOACs.\n\n\nMETHODS\nA mobile app to support AFib patients was previously developed based on early stage interview and usability test data from clinicians and patients. An exploratory pilot study consisting of naturalistic app use, surveys, and semistructured interviews was then conducted to examine patients' perceptions and everyday use of the app.\n\n\nRESULTS\nA total of 12 individuals with an existing diagnosis of nonvalvular AFib completed the 4-week study. The average age of participants was 59 years. All participants somewhat or strongly agreed that the app was easy to use, and 92% (11/12) reported being satisfied or very satisfied with the app. Participant feedback identified changes that may improve app usability and usefulness for patients with AFib. Areas of usability improvement were organized by three themes: app navigation, clarity of app instructions and design intent, and software bugs. Perceptions of app usefulness were grouped by three key variables: core needs of the patient segment, patient workflow while managing AFib, and the app's ability to support the patient's evolving needs.\n\n\nCONCLUSIONS\nThe results of this study suggest that mobile tools that target self-care and treatment adherence may be helpful to AFib patients, particularly those who are newly diagnosed. Additionally, participant feedback provided insight into the varied needs and health experiences of AFib patients, which may improve the design and targeting of the intervention. Pilot studies that qualitatively examine patient perceptions of usability and usefulness are a valuable and often underutilized method for assessing the real-world acceptability of an intervention. Additional research evaluating the AFib Connect mobile app over a longer period, and including a larger, more diverse sample of AFib patients, will be helpful for understanding whether the app is perceived more broadly to be useful and effective in supporting patient self-care and medication adherence.",
"title": ""
},
{
"docid": "8f3b18f410188ae4f7b09435ce92639e",
"text": "Biogenic amines are important nitrogen compounds of biological importance in vegetable, microbial and animal cells. They can be detected in both raw and processed foods. In food microbiology they have sometimes been related to spoilage and fermentation processes. Some toxicological characteristics and outbreaks of food poisoning are associated with histamine and tyramine. Secondary amines may undergo nitrosation and form nitrosamines. A better knowledge of the factors controlling their formation is necessary in order to improve the quality and safety of food.",
"title": ""
},
{
"docid": "d06acdee303eb1831151362b278c1762",
"text": "Universal language representation is the holy grail in machine translation (MT). Thanks to the new neural MT approach, it seems that there are good perspectives towards this goal. In this paper, we propose a new architecture based on combining variational autoencoders with encoder-decoders, and introducing an interlingual loss as an additional training objective. By adding and forcing this interlingual loss, we are able to train multiple encoders and decoders for each language, sharing a common universal representation. Since the final objective of this universal representation is producing close results for similar input sentences (in any language), we propose to evaluate it by encoding the same sentence in two different languages, decoding both latent representations into the same language and comparing both outputs. Preliminary results on the WMT 2017 Turkish/English task shows that the proposed architecture is capable of learning a universal language representation and simultaneously training both translation directions with state-of-the-art results.",
"title": ""
}
] |
scidocsrr
|
bba5f478e30f6366a1c764db4a6fc4a6
|
One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL
|
[
{
"docid": "10ae6cdb445e4faf1e6bed5cad6eb3ba",
"text": "It this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will be made available at https://github.com/DmitryUlyanov/texture_nets.",
"title": ""
},
{
"docid": "c6238089da4208841ac6f4f92748be8c",
"text": "In order for a robot to be a generalist that can perform a wide range of jobs, it must be able to acquire a wide variety of skills quickly and efficiently in complex unstructured environments. High-capacity models such as deep neural networks can enable a robot to represent complex skills, but learning each skill from scratch then becomes infeasible. In this work, we present a meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration. Unlike prior methods for one-shot imitation, our method can scale to raw pixel inputs and requires data from significantly fewer prior tasks for effective learning of new skills. Our experiments on both simulated and real robot platforms demonstrate the ability to learn new tasks, end-to-end, from a single visual demonstration.",
"title": ""
},
{
"docid": "aea68ebce25f10c9b630e155e201e698",
"text": "Maintaining accurate world knowledge in a complex and changing environment is a perennial problem for robots and other artificial intelligence systems. Our architecture for addressing this problem, called Horde, consists of a large number of independent reinforcement learning sub-agents, or demons. Each demon is responsible for answering a single predictive or goal-oriented question about the world, thereby contributing in a factored, modular way to the system’s overall knowledge. The questions are in the form of a value function, but each demon has its own policy, reward function, termination function, and terminal-reward function unrelated to those of the base problem. Learning proceeds in parallel by all demons simultaneously so as to extract the maximal training information from whatever actions are taken by the system as a whole. Gradient-based temporal-difference learning methods are used to learn efficiently and reliably with function approximation in this off-policy setting. Horde runs in constant time and memory per time step, and is thus suitable for learning online in realtime applications such as robotics. We present results using Horde on a multi-sensored mobile robot to successfully learn goal-oriented behaviors and long-term predictions from offpolicy experience. Horde is a significant incremental step towards a real-time architecture for efficient learning of general knowledge from unsupervised sensorimotor interaction.",
"title": ""
},
{
"docid": "503af27bc7de93815010aefbae4a20ed",
"text": "This work shows that policies with simple linear and RBF parameterizations can be trained to solve a variety of widely studied continuous control tasks, including the gym-v1 benchmarks. The performance of these trained policies are competitive with state of the art results, obtained with more elaborate parameterizations such as fully connected neural networks. Furthermore, the standard training and testing scenarios for these tasks are shown to be very limited and prone to over-fitting, thus giving rise to only trajectory-centric policies. Training with a diverse initial state distribution induces more global policies with better generalization. This allows for interactive control scenarios where the system recovers from large on-line perturbations; as shown in the supplementary video.",
"title": ""
}
] |
[
{
"docid": "9a9be12c677356314b8466b430b83546",
"text": "Reality-based modeling of vibrations has been used to enhance the haptic display of virtual environments for impact events such as tapping, although the bandwidths of many haptic displays make it difficult to accurately replicate the measured vibrations. We propose modifying reality-based vibration parameters through a series of perceptual experiments with a haptic display. We created a vibration feedback model, a decaying sinusoidal waveform, by measuring the acceleration of the stylus of a three degree-of-freedom haptic display as a human user tapped it on several real materials. For some materials, the measured parameters (amplitude, frequency, and decay rate) were greater than the bandwidth of the haptic display; therefore, the haptic device was not capable of actively displaying all the vibration models. A series of perceptual experiments, where human users rated the realism of various parameter combinations, were performed to further enhance the realism of the vibration display for impact events given these limitations. The results provided different parameters than those derived strictly from acceleration data. Additional experiments verified the effectiveness of these modified model parameters by showing that users could differentiate between materials in a virtual environment.",
"title": ""
},
{
"docid": "5277cdcfb9352fa0e8cf08ff723d34c6",
"text": "Extractive style query oriented multi document summariza tion generates the summary by extracting a proper set of sentences from multiple documents based on the pre given query. This paper proposes a novel multi document summa rization framework via deep learning model. This uniform framework consists of three parts: concepts extraction, summary generation, and reconstruction validation, which work together to achieve the largest coverage of the docu ments content. A new query oriented extraction technique is proposed to concentrate distributed information to hidden units layer by layer. Then, the whole deep architecture is fi ne tuned by minimizing the information loss of reconstruc tion validation. According to the concentrated information, dynamic programming is used to seek most informative set of sentences as the summary. Experiments on three bench mark datasets demonstrate the effectiveness of the proposed framework and algorithms.",
"title": ""
},
{
"docid": "0b146cb20ed80b17f607251fba7e25d7",
"text": "Presence is widely accepted as the key concept to be considered in any research involving human interaction with Virtual Reality (VR). Since its original description, the concept of presence has developed over the past decade to be considered by many researchers as the essence of any experience in a virtual environment. The VR generating systems comprise two main parts: a technological component and a psychological experience. The different relevance given to them produced two different but coexisting visions of presence: the rationalist and the psychological/ecological points of view. The rationalist point of view considers a VR system as a collection of specific machines with the necessity of the inclusion of the concept of presence. The researchers agreeing with this approach describe the sense of presence as a function of the experience of a given medium (Media Presence). The main result of this approach is the definition of presence as the perceptual illusion of non-mediation produced by means of the disappearance of the medium from the conscious attention of the subject. At the other extreme, there is the psychological or ecological perspective (Inner Presence). Specifically, this perspective considers presence as a neuropsychological phenomenon, evolved from the interplay of our biological and cultural inheritance, whose goal is the control of the human activity. Given its key role and the rate at which new approaches to understanding and examining presence are appearing, this chapter draws together current research on presence to provide an up to date overview of the most widely accepted approaches to its understanding and measurement.",
"title": ""
},
{
"docid": "97a1453d230df4f8c57eed1d3a1aaa19",
"text": "In this letter, an isolation improvement method between two closely packed planar inverted-F antennas (PIFAs) is proposed via a miniaturized ground slot with a chip capacitor. The proposed T-shaped ground slot acts as a notch filter, and the capacitor is utilized to reduce the slot length. The equivalent circuit model of the proposed slot with the capacitor is derived. The measured isolation between two PIFAs is down to below -20 dB at the whole WLAN band of 2.4 GHz.",
"title": ""
},
{
"docid": "b730eb83f78fc9fb0466d9ea0e123451",
"text": "Control-Flow Integrity (CFI) is a software-hardening technique. It inlines checks into a program so that its execution always follows a predetermined Control-Flow Graph (CFG). As a result, CFI is effective at preventing control-flow hijacking attacks. However, past fine-grained CFI implementations do not support separate compilation, which hinders its adoption.\n We present Modular Control-Flow Integrity (MCFI), a new CFI technique that supports separate compilation. MCFI allows modules to be independently instrumented and linked statically or dynamically. The combined module enforces a CFG that is a combination of the individual modules' CFGs. One challenge in supporting dynamic linking in multithreaded code is how to ensure a safe transition from the old CFG to the new CFG when libraries are dynamically linked. The key technique we use is to have the CFG represented in a runtime data structure and have reads and updates of the data structure wrapped in transactions to ensure thread safety. Our evaluation on SPECCPU2006 benchmarks shows that MCFI supports separate compilation, incurs low overhead of around 5%, and enhances security.",
"title": ""
},
{
"docid": "afe00d3f8364159d77611582c611e981",
"text": "Today, in addition to traditional mobile services, there are new ones already being used, thanks to the advances in 3G-related technologies. Our work contributed to the emerging body of research by integrating TAM and Diffusion Theory. Based on a sample of 542 Dutch consumers, we found that traditional antecedents of behavioral intention, ease of use and perceived usefulness, can be linked to diffusion-related variables, such as social influence and perceived benefits (flexibility and status). 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "98ff207ca344eb058c6bf7ba87751822",
"text": "Ultra-wideband radar is an excellent tool for nondestructive examination of walls and highway structures. Therefore often steep edged narrow pulses with rise-, fall-times in the range of 100 ps are used. For digitizing of the reflected pulses a down conversion has to be accomplished. A new low cost sampling down converter with a sampling phase detector for use in ultra-wideband radar applications is presented.",
"title": ""
},
{
"docid": "c7f6a99df60e96c98862e366c4bc3646",
"text": "Doppio is a reconfigurable smartwatch with two touch sensitive display faces. The orientation of the top relative to the base and how the top is attached to the base, creates a very large interaction space. We define and enumerate possible configurations, transitions, and manipulations in this space. Using a passive prototype, we conduct an exploratory study to probe how people might use this style of smartwatch interaction. With an instrumented prototype, we conduct a controlled experiment to evaluate the transition times between configurations and subjective preferences. We use the combined results of these two studies to generate a set of characteristics and design considerations for applying this interaction space to smartwatch applications. These considerations are illustrated with a proof-of-concept hardware prototype demonstrating how Doppio interactions can be used for notifications, private viewing, task switching, temporary information access, application launching, application modes, input, and sharing the top.",
"title": ""
},
{
"docid": "45addba115a5046a9840daf2860e8ddc",
"text": "This paper investigates the use of Doppler radar sensor for occupancy monitoring. The feasibility of true presence is explored with Doppler radar occupancy sensors to overcome the limitations of the common occupancy sensors. The common occupancy sensors are more of a motion sensor than a presence detector. Existing cost effective off the shelf System-on-Chip CC2530 RF transceiver is used for developing the radio. The transmitter sends continuous wave signal at 2.405 GHz. Different levels of activity is detected by post-processing sensor signals. Heart and respiratory signals are extracted in order to improve stationary subject detection.",
"title": ""
},
{
"docid": "52a4af83304ad0a5fe3a77dfdfdabb6a",
"text": "Discovering semantic coherent topics from the large amount of user-generated content (UGC) in social media would facilitate many downstream applications of intelligent computing. Topic models, as one of the most powerful algorithms, have been widely used to discover the latent semantic patterns in text collections. However, one key weakness of topic models is that they need documents with certain length to provide reliable statistics for generating coherent topics. In Twitter, the users’ tweets are mostly short and noisy. Observations of word co-occurrences are incomprehensible for topic models. To deal with this problem, previous work tried to incorporate prior knowledge to obtain better results. However, this strategy is not practical for the fast evolving UGC in Twitter. In this paper, we first cluster the users according to the retweet network, and the users’ interests are mined as the prior knowledge. Such data are then applied to improve the performance of topic learning. The potential cause for the effectiveness of this approach is that users in the same community usually share similar interests, which will result in less noisy sub-data sets. Our algorithm pre-learns two types of interest knowledge from the data set: the interest-word-sets and a tweet-interest preference matrix. Furthermore, a dedicated background model is introduced to judge whether a word is drawn from the background noise. Experiments on two real life twitter data sets show that our model achieves significant improvements over state-of-the-art baselines.",
"title": ""
},
{
"docid": "b6ae6ee48c9ddc2c18a194f53917a79e",
"text": "Modern information systems produce tremendous amounts of event data. The area of process mining deals with extracting knowledge from this data. Real-life processes can be effectively discovered, analyzed and optimized with the help of mature process mining techniques. There is a variety of process mining case studies and experience reports from such business areas as healthcare, public, transportation and education. Although nowadays, these techniques are mostly used for discovering business processes.\n The goal of this industrial paper is to show that process mining can be applied to software too. Here we present and analyze our experiences on applying process mining in different productive software systems used in the touristic domain. Process models and user interface workflows underlie the functional specifications of the systems we experiment with. When the systems are utilized, user interaction is recorded in event logs. After applying process mining methods to these logs, process and user interface flow models are automatically derived. These resulting models provide insight regarding the real usage of the software, motivate the changes in the functional specifications, enable usability improvements and software redesign.\n Thus, with the help of our examples we demonstrate that process mining facilitates new forms of software analysis. The user interaction with almost every software system can be mined in order to improve the software and to monitor and measure its real usage.",
"title": ""
},
{
"docid": "5d5c036d03bd15688fec89c5af9dfbd8",
"text": "OBJECTIVE\nThis study evaluated the efficacy and tolerability of desvenlafaxine succinate (desvenlafaxine) in the treatment of major depressive disorder (MDD).\n\n\nMETHOD\nIn this 8-week, multicenter, randomized, double-blind, placebo-controlled trial, adult outpatients (aged 18-75 years) with a primary diagnosis of MDD (DSM-IV criteria) were randomly assigned to treatment with desvenlafaxine (100-200 mg/day) or placebo. The primary outcome measure was the 17-item Hamilton Rating Scale for Depression (HAM-D(17)) score at final on-therapy evaluation. The Clinical Global Impressions-Improvement scale (CGI-I) was the key secondary measure. Other secondary measures included the Montgomery-Asberg Depression Rating Scale (MADRS), Clinical Global Impressions-Severity of Illness scale, Visual Analog Scale-Pain Intensity (VAS-PI) overall and subcomponent scores, and HAM-D(17) response and remission rates. The study was conducted from June 2003 to May 2004.\n\n\nRESULTS\nOf the 247 patients randomly assigned to treatment, 234 comprised the intent-to-treat population. Following titration, mean daily desvenlafaxine doses ranged from 179 to 195 mg/day. At endpoint, there were no significant differences in scores between the desvenlafaxine (N = 120) and placebo (N = 114) groups on the HAM-D(17) or CGI-I. However, the desvenlafaxine group had significantly greater improvement in MADRS scores (p = .047) and in VAS-PI overall pain (p = .008), back pain (p = .006), and arm, leg, or joint pain (p < .001) scores than the placebo group. The most common treatment-emergent adverse events (at least 10% and twice the rate of placebo) were nausea, dry mouth, constipation, anorexia, somnolence, and nervousness.\n\n\nCONCLUSION\nDesvenlafaxine was generally safe and well tolerated. In this study, it did not show significantly greater efficacy than placebo on the primary or key secondary efficacy endpoints, but it did demonstrate efficacy on an alternate depression scale and pain measure associated with MDD.\n\n\nCLINICAL TRIALS REGISTRATION\nClinicalTrials.gov identifier NCT00063206.",
"title": ""
},
{
"docid": "01eadabcfbe9274c47d9ebcd45ea2332",
"text": "The classical uncertainty principle provides a fundamental tradeoff in the localization of a signal in the time and frequency domains. In this paper we describe a similar tradeoff for signals defined on graphs. We describe the notions of “spread” in the graph and spectral domains, using the eigenvectors of the graph Laplacian as a surrogate Fourier basis. We then describe how to find signals that, among all signals with the same spectral spread, have the smallest graph spread about a given vertex. For every possible spectral spread, the desired signal is the solution to an eigenvalue problem. Since localization in graph and spectral domains is a desirable property of the elements of wavelet frames on graphs, we compare the performance of some existing wavelet transforms to the obtained bound.",
"title": ""
},
{
"docid": "751f4c5c624445612b5d02cf353a6a7e",
"text": "In this paper, we report on the integration challenges of the various component technologies developed towards the establishment of a framework for deploying an adaptive system of heterogeneous robots for urban surveillance. In our integrated experiment and demonstration, aerial robots generate maps that are used to design navigation controllers and plan missions for the team. A team of ground robots constructs a radio signal strength map that is used as an aid for planning missions. Multiple robots establish a mobile, ad-hoc communication network that is aware of the radio signal strength between nodes and can adapt to changing conditions to maintain connectivity. Finally, the team of aerial and ground robots is able to monitor a small village, and search for and localize human targets by the color of the uniform, while ensuring that the information from the team is available to a remotely located human operator. The key component technologies and contributions include (a) mission specification and planning software; (b) exploration and mapping of radio signal strengths in an urban environment; (c) programming abstractions and composition of controllers for multi-robot deployment; (d) cooperative control strategies for search, identification, and localization of targets; and (e) three-dimensional mapping in an urban setting.",
"title": ""
},
{
"docid": "d1cd251846175ea452416a3a4c94a64f",
"text": "Faculty of Science Business Analytics",
"title": ""
},
{
"docid": "64ef634078467594df83fe4cec779c27",
"text": "In Natural Language Processing the sequence-to-sequence, encoder-decoder model is very successful in generating sentences, as are the tasks of dialogue, translation and question answering. On top of this model an attention mechanism is often used. The attention mechanism has the ability to look back at all encoder outputs for every decoding step. The performance increase of attention shows that the final encoded state of an input sequence alone is too poor to successfully generate a target. In this paper more elaborate forms of attention, namely external memory, are investigated on varying properties within the range of dialogue. In dialogue, the target sequence is much more complex to predict than in other tasks, since the sequence can be of arbitrary length and can contain any information related to any of the previous utterances. External memory is hypothesized to improve performance exactly because of these properties of dialogue. Varying memory models are tested on a range of context sizes. Some memory modules show more stable results with an increasing context size.",
"title": ""
},
{
"docid": "ce1f9cbd9cedf63d7e5bea0bfae415c4",
"text": "The present study is an attempt to discover some of the statistically significant outlines motivations and factors that influence the quality in e-ticketing, which affects customers’ perceptions, preferences, and intentions. Consumers, especially business professions – the subjects of this study – are constantly demanding higher quality e-commerce services. All three hypotheses were found to be highly significant and positively related to promoting the perceived value of e-ticketing technologies, especially for sport-related events. Based on technology adoption models, e-ticketing does provide significant levels of perceived value and its linkage to customer satisfaction are important factors as well as operational costs. It seems obvious that box office staffs will become smaller in size as more e-ticketing devices and acceptance increases. Technological applications should continue to grow, and eventual acceptance of ticket kiosks, wireless ticket purchases, will undoubtedly change from being an industry rarity to an industry standard.",
"title": ""
},
{
"docid": "9fc8d85122f1cf22e63ac2401531e448",
"text": "Recognizing multiple labels of images is a fundamental but challenging task in computer vision, and remarkable progress has been attained by localizing semantic-aware image regions and predicting their labels with deep convolutional neural networks. The step of hypothesis regions (region proposals) localization in these existing multi-label image recognition pipelines, however, usually takes redundant computation cost, e.g., generating hundreds of meaningless proposals with nondiscriminative information and extracting their features, and the spatial contextual dependency modeling among the localized regions are often ignored or over-simplified. To resolve these issues, this paper proposes a recurrent attention reinforcement learning framework to iteratively discover a sequence of attentional and informative regions that are related to different semantic objects and further predict label scores conditioned on these regions. Besides, our method explicitly models longterm dependencies among these attentional regions that help to capture semantic label co-occurrence and thus facilitate multilabel recognition. Extensive experiments and comparisons on two large-scale benchmarks (i.e., PASCAL VOC and MSCOCO) show that our model achieves superior performance over existing state-of-the-art methods in both performance and efficiency as well as explicitly identifying image-level semantic labels to specific object regions.",
"title": ""
},
{
"docid": "8b948819efed14853dcfeeabdb28c1be",
"text": "We derive a new self-organizing learning algorithm that maximizes the information transferred in a network of nonlinear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximization has extra properties not found in the linear case (Linsker 1989). The nonlinearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalization of principal components analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to 10 speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximization provides a unifying framework for problems in \"blind\" signal processing.",
"title": ""
}
] |
scidocsrr
|
b4d6f91b8d6c7bd20d3d5298e5b07f6a
|
The acoustic emotion gaussians model for emotion-based music annotation and retrieval
|
[
{
"docid": "1ace2a8a8c6b4274ac0891e711d13190",
"text": "Recent music information retrieval (MIR) research pays increasing attention to music classification based on moods expressed by music pieces. The first Audio Mood Classification (AMC) evaluation task was held in the 2007 running of the Music Information Retrieval Evaluation eXchange (MIREX). This paper describes important issues in setting up the task, including dataset construction and ground-truth labeling, and analyzes human assessments on the audio dataset, as well as system performances from various angles. Interesting findings include system performance differences with regard to mood clusters and the levels of agreement amongst human judgments regarding mood labeling. Based on these analyses, we summarize experiences learned from the first community scale evaluation of the AMC task and propose recommendations for future AMC and similar evaluation tasks.",
"title": ""
},
{
"docid": "c87fa26d080442b1527fcc6a74df7ec4",
"text": "We present MIRtoolbox, an integrated set of functions written in Matlab, dedicated to the extraction of musical features from audio files. The design is based on a modular framework: the different algorithms are decomposed into stages, formalized using a minimal set of elementary mechanisms, and integrating different variants proposed by alternative approaches – including new strategies we have developed –, that users can select and parametrize. This paper offers an overview of the set of features, related, among others, to timbre, tonality, rhythm or form, that can be extracted with MIRtoolbox. Four particular analyses are provided as examples. The toolbox also includes functions for statistical analysis, segmentation and clustering. Particular attention has been paid to the design of a syntax that offers both simplicity of use and transparent adaptiveness to a multiplicity of possible input types. Each feature extraction method can accept as argument an audio file, or any preliminary result from intermediary stages of the chain of operations. Also the same syntax can be used for analyses of single audio files, batches of files, series of audio segments, multichannel signals, etc. For that purpose, the data and methods of the toolbox are organised in an object-oriented architecture. 1. MOTIVATION AND APPROACH MIRToolbox is a Matlab toolbox dedicated to the extraction of musically-related features from audio recordings. It has been designed in particular with the objective of enabling the computation of a large range of features from databases of audio files, that can be applied to statistical analyses. Few softwares have been proposed in this area. The most important one, Marsyas [1], provides a general architecture for connecting audio, soundfiles, signal processing blocks and machine learning (see section 5 for more details). One particularity of our own approach relies in the use of the Matlab computing environment, which offers good visualisation capabilities and gives access to a large variety of other toolboxes. In particular, the MIRToolbox makes use of functions available in recommended public-domain toolboxes such as the Auditory Toolbox [2], NetLab [3], or SOMtoolbox [4]. Other toolboxes, such as the Statistics toolbox or the Neural Network toolbox from MathWorks, can be directly used for further analyses of the features extracted by MIRToolbox without having to export the data from one software to another. Such computational framework, because of its general objectives, could be useful to the research community in Music Information Retrieval (MIR), but also for educational purposes. For that reason, particular attention has been paid concerning the ease of use of the toolbox. In particular, complex analytic processes can be designed using a very simple syntax, whose expressive power comes from the use of an object-oriented paradigm. The different musical features extracted from the audio files are highly interdependent: in particular, as can be seen in figure 1, some features are based on the same initial computations. In order to improve the computational efficiency, it is important to avoid redundant computations of these common components. Each of these intermediary components, and the final musical features, are therefore considered as building blocks that can been freely articulated one with each other. 
Besides, in keeping with the objective of optimal ease of use of the toolbox, each building block has been conceived in a way that it can adapt to the type of input data. For instance, the computation of the MFCCs can be based on the waveform of the initial audio signal, or on the intermediary representations such as spectrum, or mel-scale spectrum (see Fig. 1). Similarly, autocorrelation is computed for different range of delays depending on the type of input data (audio waveform, envelope, spectrum). This decomposition of all the set of feature extraction algorithms into a common set of building blocks has the advantage of offering a synthetic overview of the different approaches studied in this domain of research. 2. FEATURE EXTRACTION 2.1. Feature overview Figure 1 shows an overview of the main features implemented in the toolbox. All the different processes start from the audio signal (on the left) and form a chain of operations proceeding to right. The vertical disposition of the processes indicates an increasing order of complexity of the operations, from simplest computation (top) to more detailed auditory modelling (bottom). Each musical feature is related to one of the musical dimensions traditionally defined in music theory. Boldface characters highlight features related to pitch, to tonality (chromagram, key strength and key Self-Organising Map, or SOM) and to dynamics (Root Mean Square, or RMS, energy). Bold italics indicate features related to rhythm, namely tempo, pulse clarity and fluctuation. Simple italics highlight a large set of features that can be associated to timbre. Among them, all the operators in grey italics can be in fact applied to many others different representations: for instance, statistical moments such as centroid, kurtosis, etc., can be applied to either spectra, envelopes, but also to histograms based on any given feature. One of the simplest features, zero-crossing rate, is based on a simple description of the audio waveform itself: it counts the number of sign changes of the waveform. Signal energy is computed using root mean square, or RMS [5]. The envelope of the audio signal offers timbral characteristics of isolated sonic event. FFT-based spectrum can be computed along the frequency domain or along Mel-bands, with linear or decibel energy scale, and",
"title": ""
},
{
"docid": "1419e2f53412b4ce2d6944bad163f13d",
"text": "Determining the emotion of a song that best characterizes the affective content of the song is a challenging issue due to the difficulty of collecting reliable ground truth data and the semantic gap between human's perception and the music signal of the song. To address this issue, we represent an emotion as a point in the Cartesian space with valence and arousal as the dimensions and determine the coordinates of a song by the relative emotion of the song with respect to other songs. We also develop an RBF-ListNet algorithm to optimize the ranking-based objective function of our approach. The cognitive load of annotation, the accuracy of emotion recognition, and the subjective quality of the proposed approach are extensively evaluated. Experimental results show that this ranking-based approach simplifies emotion annotation and enhances the reliability of the ground truth. The performance of our algorithm for valence recognition reaches 0.326 in Gamma statistic.",
"title": ""
}
] |
[
{
"docid": "26ea28fd0b803d36ff97600b4dc0c0d2",
"text": "This paper proposes a neural network architecture and training scheme to learn the start and end time of sound events (strong labels) in an audio recording given just the list of sound events existing in the audio without time information (weak labels). We achieve this by using a stacked convolutional and recurrent neural network with two prediction layers in sequence one for the strong followed by the weak label. The network is trained using frame-wise log melband energy as the input audio feature, and weak labels provided in the dataset as labels for the weak label prediction layer. Strong labels are generated by replicating the weak labels as many number of times as the frames in the input audio feature, and used for strong label layer during training. We propose to control what the network learns from the weak and strong labels by different weighting for the loss computed in the two prediction layers. The proposed method is evaluated on a publicly available dataset of 155 hours with 17 sound event classes. The method achieves the best error rate of 0.84 for strong labels and F-score of 43.3% for weak labels on the unseen test split.",
"title": ""
},
{
"docid": "68651d6e68de08701f36907adda152ba",
"text": "This article presents a case involving a 16-year-old boy who came to the Tripler Army Medical Center Oral and Maxillofacial Surgery with a central giant cell granuloma (CGCG) on the anterior mandible. Initial management consisted of surgical curettage and intralesional injection of corticosteroids. Upon completion of steroid therapy, there was clinical and radiographic evidence of remission; however, radiographic evidence of lesion recurrence was seen at a six-month follow-up visit. The CGCG was retreated with curettage and five months of systemic injections of calcitonin, both of which failed. The lesion was most likely an aggressive form of CGCG that progressed despite conservative therapy, with destruction of hard and soft tissues, root resorption, tooth displacement, and paraesthesia in the anterior mandible. The authors present a treatment algorithm with comprehensive management involving surgical resection, reconstruction, orthodontics, and orthognathic surgery with prosthodontic considerations.",
"title": ""
},
{
"docid": "6123b06d93afe26911bc501f0d51c3ad",
"text": "Results of several investigations indicate that eye movements can reveal memory for elements of previous experience. These effects of memory on eye movement behavior can emerge very rapidly, changing the efficiency and even the nature of visual processing without appealing to verbal reports and without requiring conscious recollection. This aspect of eye movement based memory investigations is particularly useful when eye movement methods are used with special populations (e.g., young children, elderly individuals, and patients with severe amnesia), and also permits use of comparable paradigms in animals and humans, helping to bridge different memory literatures and permitting cross-species generalizations. Unique characteristics of eye movement methods have produced findings that challenge long-held views about the nature of memory, its organization in the brain, and its failures in special populations. Recently, eye movement methods have been successfully combined with neuroimaging techniques such as fMRI, single-unit recording, and magnetoencephalography, permitting more sophisticated investigations of memory. Ultimately, combined use of eye-tracking with neuropsychological and neuroimaging methods promises to provide a more comprehensive account of brain-behavior relationships and adheres to the \"converging evidence\" approach to cognitive neuroscience.",
"title": ""
},
{
"docid": "6bdb8048915000b2d6c062e0e71b8417",
"text": "Depressive disorders are the most typical disease affecting many different factors of humanity. University students may be at increased risk of depression owing to the pressure and stress they encounter. Therefore, the purpose of this study is comparing the level of depression among male and female athletes and non-athletes undergraduate student of private university in Esfahan, Iran. The participants in this research are composed of 400 male and female athletes as well as no-athletes Iranian undergraduate students. The Beck depression test (BDI) was employed to measure the degree of depression. T-test was used to evaluate the distinction between athletes and non-athletes at P≤0.05. The ANOVA was conducted to examine whether there was a relationship between level of depression among non-athletes and athletes. The result showed that the prevalence rate of depression among non-athlete male undergraduate students is significantly higher than that of athlete male students. The results also presented that level of depression among female students is much more frequent compared to males. This can be due to the fatigue and lack of energy that are more frequent among female in comparison to the male students. Physical activity was negatively related to the level of depression by severity among male and female undergraduate students. However, there is no distinct relationship between physical activity and level of depression according to the age of athlete and nonathlete male and female undergraduate students. This study has essential implications for clinical psychology due to the relationship between physical activity and prevalence of depression.",
"title": ""
},
{
"docid": "43d03b41c03bb11f8a73e9b8247e197a",
"text": "UNLABELLED\nWhile cognitive behavior therapy has been found to be effective in the treatment of generalized anxiety disorder (GAD), a significant percentage of patients struggle with residual symptoms. There is some conceptual basis for suggesting that cultivation of mindfulness may be helpful for people with GAD. Mindfulness-based cognitive therapy (MBCT) is a group treatment derived from mindfulness-based stress reduction (MBSR) developed by Jon Kabat-Zinn and colleagues. MBSR uses training in mindfulness meditation as the core of the program. MBCT incorporates cognitive strategies and has been found effective in reducing relapse in patients with major depression (Teasdale, J. D., Segal, Z. V., Williams, J. M. G., Ridgeway, V., Soulsby, J., & Lau, M. (2000). Prevention of relapse/recurrence in major depression by mindfulness-based cognitive therapy. Journal of Consulting and Clinical Psychology, 6, 615-623).\n\n\nMETHOD\nEligible subjects recruited to a major academic medical center participated in the group MBCT course and completed measures of anxiety, worry, depressive symptoms, mood states and mindful awareness in everyday life at baseline and end of treatment.\n\n\nRESULTS\nEleven subjects (six female and five male) with a mean age of 49 (range=36-72) met criteria and completed the study. There were significant reductions in anxiety and depressive symptoms from baseline to end of treatment.\n\n\nCONCLUSION\nMBCT may be an acceptable and potentially effective treatment for reducing anxiety and mood symptoms and increasing awareness of everyday experiences in patients with GAD. Future directions include development of a randomized clinical trial of MBCT for GAD.",
"title": ""
},
{
"docid": "006347cd3839d9fabd983e7cc379322d",
"text": "Recent progress in both Artificial Intelligence (AI) and Robotics have enabled the development of general purpose robot platforms that are capable of executing a wide variety of complex, temporally extended service tasks in open environments. This article introduces a novel, custom-designed multi-robot platform for research on AI, robotics, and especially Human-Robot Interaction (HRI) for service robots. Called BWIBots, the robots were designed as a part of the Building-Wide Intelligence (BWI) project at the University of Texas at Austin. The article begins with a description of, and justification for, the hardware and software design decisions underlying the BWIBots, with the aim of informing the design of such platforms in the future. It then proceeds to present an overview of various research contributions that have enabled the BWIBots to better (i) execute action sequences to complete user requests, (ii) efficiently ask questions to resolve user requests, (iii) understand human commands given in natural language, and (iv) understand human intention from afar. The article concludes with a look forward towards future research opportunities and applications enabled by the BWIBot platform.",
"title": ""
},
{
"docid": "9516d06751aa51edb0b0a3e2b75e0bde",
"text": "This paper presents a pilot-based compensation algorithm for mitigation of frequency-selective I/Q imbalances in direct-conversion OFDM transmitters. By deploying a feedback loop from RF to baseband, together with a properly-designed pilot signal structure, the I/Q imbalance properties of the transmitter are efficiently estimated in a subcarrier-wise manner. Based on the obtained I/Q imbalance knowledge, the imbalance effects on the actual transmit waveform are then mitigated by baseband pre-distortion acting on the mirror-subcarrier signals. The compensation performance of the proposed structure is analyzed using extensive computer simulations, indicating that very high image rejection ratios can be achieved in practical system set-ups with reasonable pilot signal lengths.",
"title": ""
},
{
"docid": "744d9e4c48ed65ddee5f0fdb83014d63",
"text": "Overview of frequency domain measurement techniques of the complex permittivity at microwave frequencies is presented. The methods are divided into two categories: resonant and non-resonant ones. In the first category several methods are discussed such as cavity resonator techniques, dielectric resonator techniques, open resonator techniques and resonators for non-destructive testing. The general theory of measurements of different materials in resonant structures is presented showing mathematical background, sources of uncertainties and theoretical and experimental limits. Methods of measurement of anisotropic materials are presented. In the second category, transmission–reflection techniques are overviewed including transmission line cells as well as free-space techniques.",
"title": ""
},
{
"docid": "1e6583ec7a290488cd8e672ab59158b9",
"text": "Evidence-based guidelines for the management of patients with Lyme disease, human granulocytic anaplasmosis (formerly known as human granulocytic ehrlichiosis), and babesiosis were prepared by an expert panel of the Infectious Diseases Society of America. These updated guidelines replace the previous treatment guidelines published in 2000 (Clin Infect Dis 2000; 31[Suppl 1]:1-14). The guidelines are intended for use by health care providers who care for patients who either have these infections or may be at risk for them. For each of these Ixodes tickborne infections, information is provided about prevention, epidemiology, clinical manifestations, diagnosis, and treatment. Tables list the doses and durations of antimicrobial therapy recommended for treatment and prevention of Lyme disease and provide a partial list of therapies to be avoided. A definition of post-Lyme disease syndrome is proposed.",
"title": ""
},
{
"docid": "b8f1c6553cd97fab63eae159ae01797e",
"text": "0747-5632/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.chb.2010.02.004 * Corresponding author. E-mail address: [email protected] (M. Using computers with friends either in person or online has become ubiquitous in the life of most adolescents; however, little is known about the complex relation between this activity and friendship quality. This study examined direct support for the social compensation and rich-get-richer hypotheses among adolescent girls and boys by including social anxiety as a moderating factor. A sample of 1050 adolescents completed a survey in grade 9 and then again in grades 11 and 12. For girls, there was a main effect of using computers with friends on friendship quality; providing support for both hypotheses. For adolescent boys, however, social anxiety moderated this relation, supporting the social compensation hypothesis. These findings were identical for online communication and were stable throughout adolescence. Furthermore, participating in organized sports did not compensate for social anxiety for either adolescent girls or boys. Therefore, characteristics associated with using computers with friends may create a comfortable environment for socially anxious adolescents to interact with their peers which may be distinct from other more traditional adolescent activities. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fd0c18ecdcb46ed1569aa89c7e5105c3",
"text": "In recent years, weeds is responsible for most of the agricultural yield losses. To deal 1 with this threat, farmers resort to spraying pesticides throughout the field. Such method not only 2 requires huge quantities of herbicides but impact environment and humans health. One way to 3 reduce the cost and environmental impact is to allocate the right doses of herbicide at the right 4 place and at the right time (Precision Agriculture). Nowadays, Unmanned Aerial Vehicle (UAV) is 5 becoming an interesting acquisition system for weeds localization and management due to its ability 6 to obtain the images of the entire agricultural field with a very high spatial resolution and at low cost. 7 Despite the important advances in UAV acquisition systems, automatic weeds detection remains a 8 challenging problem because of its strong similarity with the crops. Recently Deep Learning approach 9 has shown impressive results in different complex classification problem. However, this approach 10 needs a certain amount of training data but, creating large agricultural datasets with pixel-level 11 annotations by expert is an extremely time consuming task. In this paper, we propose a novel fully 12 automatic learning method using Convolutional Neuronal Networks (CNNs) with unsupervised 13 training dataset collection for weeds detection from UAV images. The proposed method consists 14 in three main phases. First we automatically detect the crop lines and using them to identify the 15 interline weeds. In the second phase, interline weeds are used to constitute the training dataset. 16 Finally, we performed CNNs on this dataset to build a model able to detect the crop and weeds in the 17 images. The results obtained are comparable to the traditional supervised training data labeling. The 18 accuracy gaps are 1.5% in the spinach field and 6% in the bean field. 19",
"title": ""
},
{
"docid": "11cce2c0dae058a7d101387f58e00e5a",
"text": "It is a commonly held perception amongst biomechanists, sports medicine practitioners, baseball coaches and players, that an individual baseball player's style of throwing or pitching influences their performance and susceptibility to injury. With the results of a series of focus groups with baseball managers and pitching coaches in mind, the available scientific literature was reviewed regarding the contribution of individual aspects of pitching and throwing mechanics to potential for injury and performance. After a discussion of the limitations of kinematic and kinetic analyses, the individual aspects of pitching mechanics are discussed under arbitrary headings: Foot position at stride foot contact; Elbow flexion; Arm rotation; Arm horizontal abduction; Arm abduction; Lead knee position; Pelvic orientation; Deceleration-phase related issues; Curveballs; and Teaching throwing mechanics. In general, popular opinion of baseball coaching staff was found to be largely in concordance with the scientific investigations of biomechanists with several notable exceptions. Some difficulties are identified with the practical implementation of analyzing throwing mechanics in the field by pitching coaches, and with some unquantified aspects of scientific analyses. Key pointsBiomechanical analyses including kinematic and kinetic analyses allow for estimation of pitching performance and potential for injury.Some difficulties both theoretic and practical exist for the implementation and interpretation of such analyses.Commonly held opinions of baseball pitching authorities are largely held to concur with biomechanical analyses.Recommendations can be made regarding appropriate pitching and throwing technique in light of these investigations.",
"title": ""
},
{
"docid": "5459dc71fd40a576365f0afced64b6b7",
"text": "Cloud computing providers such as Amazon and Google have recently begun offering container-instances, which provide an efficient route to application deployment within a lightweight, isolated and well-defined execution environment. Cloud providers currently offer Container Service Platforms (CSPs), which orchestrate containerised applications. Existing CSP frameworks do not offer any form of intelligent resource scheduling: applications are usually scheduled individually, rather than taking a holistic view of all registered applications and available resources in the cloud. This can result in increased execution times for applications, resource wastage through underutilised container-instances, and a reduction in the number of applications that can be deployed, given the available resources. The research presented in this paper aims to extend existing systems by adding a cloud-based Container Management Service (CMS) framework that offers increased deployment density, scalability and resource efficiency. CMS provides additional functionalities for orchestrating containerised applications by joint optimisation of sets of containerised applications, and resource pool in multiple (geographical distributed) cloud regions. We evaluated CMS on a cloud-based CSP i.e., Amazon EC2 Container Management Service (ECS) and conducted extensive experiments using sets of CPU and Memory intensive containerised applications against the direct deployment strategy of Amazon ECS. The results show that CMS achieves up to 25% higher cluster utilisation, and up to 70% reduction in execution times.",
"title": ""
},
{
"docid": "66fd7de53986e8c4a7ed08ed88f0b45b",
"text": "BACKGROUND\nConcerns regarding the risk of estrogen replacement have resulted in a significant increase in the use of soy products by menopausal women who, despite the lack of evidence of the efficacy of such products, seek alternatives to menopausal hormone therapy. Our goal was to determine the efficacy of soy isoflavone tablets in preventing bone loss and menopausal symptoms.\n\n\nMETHODS\nThe study design was a single-center, randomized, placebo-controlled, double-blind clinical trial conducted from July 1, 2004, through March 31, 2009. Women aged 45 to 60 years within 5 years of menopause and with a bone mineral density T score of -2.0 or higher in the lumbar spine or total hip were randomly assigned, in equal proportions, to receive daily soy isoflavone tablets, 200 mg, or placebo. The primary outcome was changes in bone mineral density in the lumbar spine, total hip, and femoral neck at the 2-year follow-up. Secondary outcomes included changes in menopausal symptoms, vaginal cytologic characteristics, N -telopeptide of type I bone collagen, lipids, and thyroid function.\n\n\nRESULTS\nAfter 2 years, no significant differences were found between the participants receiving soy tablets (n = 122) and those receiving placebo (n = 126) regarding changes in bone mineral density in the spine (-2.0% and -2.3%, respectively), the total hip (-1.2% and -1.4%, respectively), or the femoral neck (-2.2% and -2.1%, respectively). A significantly larger proportion of participants in the soy group experienced hot flashes and constipation compared with the control group. No significant differences were found between groups in other outcomes.\n\n\nCONCLUSIONS\nIn this population, the daily administration of tablets containing 200 mg of soy isoflavones for 2 years did not prevent bone loss or menopausal symptoms.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT00076050.",
"title": ""
},
{
"docid": "2b0534f3d659e8eaea4d5b53af4617db",
"text": "Many organisations are currently involved in implementing Sustainable Supply Chain Management (SSCM) initiatives to address societal expectations and government regulations. Implementation of these initiatives has in turn created complexity due to the involvement of collection, management, control, and monitoring of a wide range of additional information exchanges among trading partners, which was not necessary in the past. Organisations thus would rely more on meaningful support from their IT function to help them implement and operate SSCM practices. Given the growing global recognition of the importance of sustainable supply chain (SSC) practices, existing corporate IT strategy and plans need to be revisited for IT to remain supportive and aligned with new sustainability aspirations of their organisations. Towards this goal, in this paper we report on the development of an IT maturity model specifically designed for SSCM context. The model is built based on four dimensions derived from software process maturity and IS/IT planning literatures. Our proposed model defines four progressive IT maturity stages for corporate IT function to support SSCM implementation initiatives. Some implications of the study finding and several challenges that may potentially hinder acceptance of the model by organisations are discussed.",
"title": ""
},
{
"docid": "a27ffbf7428fb863c30612342c61d757",
"text": "Social media provide a platform for users to express their opinions and share information. Understanding public health opinions on social media, such as Twitter, offers a unique approach to characterizing common health issues such as diabetes, diet, exercise, and obesity (DDEO); however, collecting and analyzing a large scale conversational public health data set is a challenging research task. The goal of this research is to analyze the characteristics of the general public’s opinions in regard to diabetes, diet, exercise and obesity (DDEO) as expressed on Twitter. A multi-component semantic and linguistic framework was developed to collect Twitter data, discover topics of interest about DDEO, and analyze the topics. From the extracted 4.5 million tweets, 8% of tweets discussed diabetes, 23.7% diet, 16.6% exercise, and 51.7% obesity. The strongest correlation among the topics was determined between exercise and obesity (p < .0002). Other notable correlations were: diabetes and obesity (p < .0005), and diet and obesity (p < .001). DDEO terms were also identified as subtopics of each of the DDEO topics. The frequent subtopics discussed along with “Diabetes”, excluding the DDEO terms themselves, were blood pressure, heart attack, yoga, and Alzheimer. The non-DDEO subtopics for “Diet” included vegetarian, pregnancy, celebrities, weight loss, religious, and mental health, while subtopics for “Exercise” included computer games, brain, fitness, and daily plan. Non-DDEO subtopics for “Obesity” included Alzheimer, cancer, and children. With 2.67 billion social media users in 2016, publicly available data such as Twitter posts can be utilized to support clinical providers, public health experts, and social scientists in better understanding common public opinions in regard to diabetes, diet, exercise, and obesity.",
"title": ""
},
{
"docid": "4d6540d6a200689721063bb7a92b71c3",
"text": "The recently-developed statistical method known as the \"bootstrap\" can be used to place confidence intervals on phylogenies. It involves resampling points from one's own data, with replacement, to create a series of bootstrap samples of the same size as the original data. Each of these is analyzed, and the variation among the resulting estimates taken to indicate the size of the error involved in making estimates from the original data. In the case of phylogenies, it is argued that the proper method of resampling is to keep all of the original species while sampling characters with replacement, under the assumption that the characters have been independently drawn by the systematist and have evolved independently. Majority-rule consensus trees can be used to construct a phylogeny showing all of the inferred monophyletic groups that occurred in a majority of the bootstrap samples. If a group shows up 95% of the time or more, the evidence for it is taken to be statistically significant. Existing computer programs can be used to analyze different bootstrap samples by using weights on the characters, the weight of a character being how many times it was drawn in bootstrap sampling. When all characters are perfectly compatible, as envisioned by Hennig, bootstrap sampling becomes unnecessary; the bootstrap method would show significant evidence for a group if it is defined by three or more characters.",
"title": ""
},
{
"docid": "3dd7efdc9de83cf59deea3688f95889f",
"text": "Keyword spotting (KWS) constitutes a major component of human-technology interfaces. Maximizing the detection accuracy at a low false alarm (FA) rate, while minimizing the footprint size, latency and complexity are the goals for KWS. Towards achieving them, we study Convolutional Recurrent Neural Networks (CRNNs). Inspired by large-scale state-ofthe-art speech recognition systems, we combine the strengths of convolutional layers and recurrent layers to exploit local structure and long-range context. We analyze the effect of architecture parameters, and propose training strategies to improve performance. With only ~230k parameters, our CRNN model yields acceptably low latency, and achieves 97.71% accuracy at 0.5 FA/hour for 5 dB signal-to-noise ratio.",
"title": ""
},
{
"docid": "b03523c80a2c1f481ad12b6589920639",
"text": "In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions. For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts of the image led the algorithm to this classification. How to interpret blackbox predictors is thus an important and active area of research. A fundamental question is: how much can we trust the interpretation itself? In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations. We systematically characterize the fragility of several widely-used feature-importance interpretation methods (saliency maps, integrated gradient, and DeepLIFT) on ImageNet and CIFAR-10. Our experiments show that even small random perturbation can change the feature importance and new systematic perturbations can lead to dramatically different interpretations without changing the label. We extend these results to show that interpretations based on exemplars (e.g. influence functions) are similarly fragile. Our analysis of the geometry of the Hessian matrix gives insight on why fragility could be a fundamental challenge to the current interpretation approaches.",
"title": ""
}
] |
scidocsrr
|
cc1fcef088f27f172514c4b87e63d68a
|
From Zero-Shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis
|
[
{
"docid": "d6564e6ab6b770792f7563377478fb18",
"text": "We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.",
"title": ""
},
{
"docid": "fd0ed39ee4a5e8dcfce49228cf246d5f",
"text": "Minimization with orthogonality constraints (e.g., X>X = I) and/or spherical constraints (e.g., ‖x‖2 = 1) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, p-harmonic flows, 1-bit compressive sensing, matrix rank minimization, etc. These problems are difficult because the constraints are not only non-convex but numerically expensive to preserve during iterations. To deal with these difficulties, we apply the Cayley transform — a Crank-Nicolson-like update scheme — to preserve the constraints and based on it, develop curvilinear search algorithms with lower flops compared to those based on projections and geodesics. The efficiency of the proposed algorithms is demonstrated on a variety of test problems. In particular, for the maxcut problem, it exactly solves a decomposition formulation for the SDP relaxation. For polynomial optimization, nearest correlation matrix estimation and extreme eigenvalue problems, the proposed algorithms run very fast and return solutions no worse than those from their stateof-the-art algorithms. For the quadratic assignment problem, a gap 0.842% to the best known solution on the largest problem “tai256c” in QAPLIB can be reached in 5 minutes on a typical laptop.",
"title": ""
},
{
"docid": "47e67d50a4fa53dc2a696fc04dc84ea7",
"text": "In this paper, we provide a new neural-network based perspective on multi-task learning (MTL) and multi-domain learning (MDL). By introducing the concept of a semantic descriptor, this framework unifies MDL and MTL as well as encompassing various classic and recent MTL/MDL algorithms by interpreting them as different ways of constructing semantic descriptors. Our interpretation provides an alternative pipeline for zero-shot learning (ZSL), where a model for a novel class can be constructed without training data. Moreover, it leads to a new and practically relevant problem setting of zero-shot domain adaptation (ZSDA), which is the analogous to ZSL but for novel domains: A model for an unseen domain can be generated by its semantic descriptor. Experiments across this range of problems demonstrate that our framework outperforms a variety of alternatives.",
"title": ""
}
] |
[
{
"docid": "8a6a26094a9752010bb7297ecc80cd15",
"text": "This paper provides standard instructions on how to protect short text messages with one-time pad encryption. The encryption is performed with nothing more than a pencil and paper, but provides absolute message security. If properly applied, it is mathematically impossible for any eavesdropper to decrypt or break the message without the proper key.",
"title": ""
},
{
"docid": "a3b4ef83e513e7541cd6c1517bf0f605",
"text": "All cellular proteins undergo continuous synthesis and degradation. This permanent renewal is necessary to maintain a functional proteome and to allow rapid changes in levels of specific proteins with regulatory purposes. Although for a long time lysosomes were considered unable to contribute to the selective degradation of individual proteins, the discovery of chaperone-mediated autophagy (CMA) changed this notion. Here, we review the characteristics that set CMA apart from other types of lysosomal degradation and the subset of molecules that confer cells the capability to identify individual cytosolic proteins and direct them across the lysosomal membrane for degradation.",
"title": ""
},
{
"docid": "e30d6fd14f091e188e6a6b86b6286609",
"text": "Assessing the spatio-temporal variations of surface water quality is important for water environment management. In this study, surface water samples are collected from 2008 to 2015 at 17 stations in the Ying River basin in China. The two pollutants i.e. chemical oxygen demand (COD) and ammonia nitrogen (NH3-N) are analyzed to characterize the river water quality. Cluster analysis and the seasonal Kendall test are used to detect the seasonal and inter-annual variations in the dataset, while the Moran's index is utilized to understand the spatial autocorrelation of the variables. The influence of natural factors such as hydrological regime, water temperature and etc., and anthropogenic activities with respect to land use and pollutant load are considered as driving factors to understand the water quality evolution. The results of cluster analysis present three groups according to the similarity in seasonal pattern of water quality. The trend analysis indicates an improvement in water quality during the dry seasons at most of the stations. Further, the spatial autocorrelation of water quality shows great difference between the dry and wet seasons due to sluices and dams regulation and local nonpoint source pollution. The seasonal variation in water quality is found associated with the climatic factors (hydrological and biochemical processes) and flow regulation. The analysis of land use indicates a good explanation for spatial distribution and seasonality of COD at the sub-catchment scale. Our results suggest that an integrated water quality measures including city sewage treatment, agricultural diffuse pollution control as well as joint scientific operations of river projects is needed for an effective water quality management in the Ying River basin.",
"title": ""
},
{
"docid": "9254b7c1f6a0393524d68aaa683dab58",
"text": "Millions of users share their opinions on Twitter, making it a valuable platform for tracking and analyzing public sentiment. Such tracking and analysis can provide critical information for decision making in various domains. Therefore it has attracted attention in both academia and industry. Previous research mainly focused on modeling and tracking public sentiment. In this work, we move one step further to interpret sentiment variations. We observed that emerging topics (named foreground topics) within the sentiment variation periods are highly related to the genuine reasons behind the variations. Based on this observation, we propose a Latent Dirichlet Allocation (LDA) based model, Foreground and Background LDA (FB-LDA), to distill foreground topics and filter out longstanding background topics. These foreground topics can give potential interpretations of the sentiment variations. To further enhance the readability of the mined reasons, we select the most representative tweets for foreground topics and develop another generative model called Reason Candidate and Background LDA (RCB-LDA) to rank them with respect to their “popularity” within the variation period. Experimental results show that our methods can effectively find foreground topics and rank reason candidates. The proposed models can also be applied to other tasks such as finding topic differences between two sets of documents.",
"title": ""
},
{
"docid": "a466b8da35f820eaaf597e1768b3e3f4",
"text": "The Internet of Things technology has been widely used in the quality tracking of agricultural products, however, the safety of storage for tracked data is still a serious challenge. Recently, with the expansion of blockchain technology applied in cross-industry field, the unchangeable features of its stored data provide us new vision about ensuring the storage safety for tracked data. Unfortunately, when the blockchain technology is directly applied in agricultural products tracking and data storage, it is difficult to automate storage and obtain the hash data stored in the blockchain in batches base on the identity. Addressing this issue, we propose a double-chain storage structure, and design a secured data storage scheme for tracking agricultural products based on blockchain. Specifically, the chained data structure is utilized to store the blockchain transaction hash, and together with the chain of the blockchain to form a double-chain storage, which ensures the data of agricultural products will not be maliciously tampered or destructed. Finally, in the practical application system, we verify the correctness and security of the proposed storage scheme.",
"title": ""
},
{
"docid": "24387104af78fd752c20764e81e4aaa5",
"text": "This paper considers the problem of tracking a dynamic sparse channel in a broadband wireless communication system. A probabilistic signal model is firstly proposed to describe the special features of temporal correlations of dynamic sparse channels: path delays change slowly over time, while path gains evolve faster. Based on such temporal correlations, we then propose the differential orthogonal matching pursuit (D-OMP) algorithm to track a dynamic sparse channel in a sequential way by updating the small channel variation over time. Compared with other channel tracking algorithms, simulation results demonstrate that the proposed D-OMP algorithm can track dynamic sparse channels faster with improved accuracy.",
"title": ""
},
{
"docid": "efc6daba6a41478f79b3a150274e6af0",
"text": "Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanism thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making appropriate predictions crucial for achieving adaptation. Such predictions or planning ahead can be achieved by way of internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present here, an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated movements, (2) distributed (at each leg) recurrent neural network based adaptive forward models with efference copies as internal models for sensory predictions and instantaneous state estimations, and (3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Using simulations we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulated terrains, crossing large gaps, leg damage adaptations, as well as climbing over high obstacles. Furthermore, we demonstrate that the newly developed recurrent network based approach to online forward models outperforms the adaptive neuron forward models, which have hitherto been the state of the art, to model a subset of similar walking behaviors in walking robots.",
"title": ""
},
{
"docid": "8bc615dfa51a9c5835660c1b0eb58209",
"text": "Large scale grid connected photovoltaic (PV) energy conversion systems have reached the megawatt level. This imposes new challenges on existing grid interface converter topologies and opens new opportunities to be explored. In this paper a new medium voltage multilevel-multistring configuration is introduced based on a three-phase cascaded H-bridge (CHB) converter and multiple string dc-dc converters. The proposed configuration enables a large increase of the total capacity of the PV system, while improving power quality and efficiency. The converter structure is very flexible and modular since it decouples the grid converter from the PV string converter, which allows to accomplish independent control goals. The main challenge of the proposed configuration is to handle the inherent power imbalances that occur not only between the different cells of one phase of the converter but also between the three phases. The control strategy to deal with these imbalances is also introduced in this paper. Simulation results of a 7-level CHB for a multistring PV system are presented to validate the proposed topology and control method.",
"title": ""
},
{
"docid": "774f1a2403acf459a4eb594c5772a362",
"text": "motion selection DTU Orbit (12/12/2018) ISSARS: An integrated software environment for structure-specific earthquake ground motion selection Current practice enables the design and assessment of structures in earthquake prone areas by performing time history analysis with the use of appropriately selected strong ground motions. This study presents a Matlab-based software environment, which is integrated with a finite element analysis package, and aims to improve the efficiency of earthquake ground motion selection by accounting for the variability of critical structural response quantities. This additional selection criterion, which is tailored to the specific structure studied, leads to more reliable estimates of the mean structural response quantities used in design, while fulfils the criteria already prescribed by the European and US seismic codes and guidelines. To demonstrate the applicability of the software environment developed, an existing irregular, multi-storey, reinforced concrete building is studied for a wide range of seismic scenarios. The results highlight the applicability of the software developed and the benefits of applying a structure-specific criterion in the process of selecting suites of earthquake motions for the seismic design and assessment. (C) 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cb62164bc5a582be0c45df28d8ebb797",
"text": "Android rooting enables device owners to freely customize their own devices and run useful apps that require root privileges. While useful, rooting weakens the security of Android devices and opens the door for malware to obtain privileged access easily. Thus, several rooting prevention mechanisms have been introduced by vendors, and sensitive or high-value mobile apps perform rooting detection to mitigate potential security exposures on rooted devices. However, there is a lack of understanding whether existing rooting prevention and detection methods are effective. To fill this knowledge gap, we studied existing Android rooting methods and performed manual and dynamic analysis on 182 selected apps, in order to identify current rooting detection methods and evaluate their effectiveness. Our results suggest that these methods are ineffective. We conclude that reliable methods for detecting rooting must come from integrity-protected kernels or trusted execution environments, which are difficult to bypass.",
"title": ""
},
{
"docid": "fdd4c5fc773aa001da927ab3776559ae",
"text": "We treated a 65-year-old Japanese man with a giant penile lymphedema due to chronic penile strangulation with a rubber band. He was referred to our hospital with progressive penile swelling that had developed over a period of 2 years from chronic use of a rubber band placed around the penile base for prevention of urinary incontinence. Under a diagnosis of giant penile lymphedema, we performed resection of abnormal penile skin weighing 4.8 kg, followed by a penile plasty procedure. To the best of our knowledge, this is only the seventh report of such a case worldwide, with the present giant penile lymphedema the most reported.",
"title": ""
},
{
"docid": "84845323a1dcb318bb01fef5346c604d",
"text": "This paper introduced a centrifugal impeller-based wall-climbing robot with the μCOS-II System. Firstly, the climber's basic configurations of mechanical were described. Secondly, the mechanic analyses of walking mechanism was presented, which was essential to the suction device design. Thirdly, the control system including the PC remote control system and the STM32 master slave system was designed. Finally, an experiment was conducted to test the performance of negative pressure generating system and general abilities of wall-climbing robot.",
"title": ""
},
{
"docid": "bb1081f8c28c3ebcfd37a4d7a3c09757",
"text": "There is increasing interest in using Field Programmable Gate Arrays (FPGAs) as platforms for computer architecture simulation. This paper is concerned with modeling superscalar processors with FPGAs. To be transformative, the FPGA modeling framework should meet three criteria. (1) Configurable: The framework should be able to model diverse superscalar processors, like a software model. In particular, it should be possible to vary superscalar parameters such as fetch, issue, and retire widths, depths of pipeline stages, queue sizes, etc. (2) Automatic: The framework should be able to automatically and efficiently map any one of its superscalar processor configurations to the FPGA. (3) Realistic: The framework should model a modern superscalar microarchitecture in detail, ideally with prototype quality, to enable a new era and depth of microarchitecture research. A framework that meets these three criteria will enjoy the convenience of a software model, the speed of an FPGA model, and the experience of a prototype. This paper describes FPGA-Sim, a configurable, automatically FPGA-synthesizable, and register-transfer-level (RTL) model of an out-of-order superscalar processor. FPGA-Sim enables FPGA modeling of diverse superscalar processors out-of-the-box. Moreover, its direct RTL implementation yields the fidelity of a hardware prototype.",
"title": ""
},
{
"docid": "7ea6a5d576e84e15d1da5c2256592fa5",
"text": "Context An optimal software development process is regarded as being dependent on the situational characteristics of individual software development settings. Such characteristics include the nature of the application(s) under development, team size, requirements volatility and personnel experience. However, no comprehensive reference framework of the situational factors affecting the software development process is presently available. Objective The absence of such a comprehensive reference framework of the situational factors affecting the software development process is problematic not just because it inhibits our ability to optimise the software development process, but perhaps more importantly, because it potentially undermines our capacity to ascertain the key constraints and characteristics of a software development setting. Method To address this deficiency, we have consolidated a substantial body of related research into an initial reference framework of the situational factors affecting the software development process. To support the data consolidation, we have applied rigorous data coding techniques from Grounded Theory and we believe that the resulting framework represents an important contribution to the software engineering field of knowledge. Results The resulting reference framework of situational factors consists of 8 classifications and 44 factors that inform the software process. We believe that the situational factor reference framework presented herein represents a sound initial reference framework for the key situational elements affecting the software process definition. Conclusion In addition to providing a useful reference listing for the research community and for committees engaged in the development of standards, the reference framework also provides support for practitioners who are challenged with defining and maintaining software development processes. Furthermore, this framework can be used to develop a profile of the situational characteristics of a software development setting, which in turn provides a sound foundation for software development process definition and optimisation.",
"title": ""
},
{
"docid": "d3f717f0e6b121e61740e4e0458e5920",
"text": "The anchor mechanism of Faster R-CNN and SSD framework is considered not effective enough to scene text detection, which can be attributed to its IoU based matching criterion between anchors and ground-truth boxes. In order to better enclose scene text instances of various shapes, it requires to design anchors of various scales, aspect ratios and even orientations manually, which makes anchor-based methods sophisticated and inefficient. In this paper, we propose a novel anchor-free region proposal network (AF-RPN) to replace the original anchor-based RPN in the Faster R-CNN framework to address the above problem. Compared with a vanilla RPN and FPN-RPN, AF-RPN can get rid of complicated anchor design and achieve higher recall rate on large-scale COCO-Text dataset. Owing to the high-quality text proposals, our Faster R-CNN based two-stage text detection approach achieves state-of-the-art results on ICDAR-2017 MLT, ICDAR-2015 and ICDAR-2013 text detection benchmarks when using single-scale and single-model (ResNet50) testing only.",
"title": ""
},
{
"docid": "da6c250c2be859c050dcbd93f17891c9",
"text": "Despite recent improvements in training methodology, discrete latent variable models have failed to achieve the performance and popularity of their continuous counterparts. Here, we evaluate several approaches to training large-scale image models on CIFAR-10 using a probabilistic variant of the recently proposed Vector Quantized VAE architecture. We find that biased estimators such as continuous relaxations provide reliable methods for training these models while unbiased score-function-based estimators like VIMCO struggle in high-dimensional discrete spaces. Furthermore, we observe that the learned discrete codes lie on low-dimensional manifolds, indicating that discrete latent variables can learn to represent continuous latent quantities. Our findings show that continuous relaxation training of discrete latent variable models is a powerful method for learning representations that can flexibly capture both continuous and discrete aspects of natural data.",
"title": ""
},
{
"docid": "cd4e20f76c050acbaec65e2cd4dd96d5",
"text": "To enhance profitability and guest satisfaction and loyalty, the organizations (hotels) should focus on implementing Customer Relationship Management (CRM) strategies that aim to seek, gather and store the right information, validate and share it throughout the organization . Hotel industry is a highly flourishing, lucrative and competitive market. To compete in such a market, the hotels should focus on maintaining good relations with the customers and satisfying the customers. Increasingly, the organizations are using Customer Relationship Management (CRM) to help boost sales and revenues by focusing on customer retention and customer loyalty. The present research was undertaken to study the Customer Relationship Management (CRM) practices in hotel industry. The purpose of this study was to determine the impact of Customer Relationship Management (CRM) on customer loyalty in the hotel industry. The study was conducted at the Hotel Taj Hotel, New Delhi. The objectives of the study were to determine if (CRM) has an impact on customer retention, to determine if the practice of effective CRM in organizations leads to a long or short term financial impact, to find out the extent or degree to which effective CRM leads to customer satisfaction and to assess if the services provided by the hotel meets the needs and wants of customers. It was found that most of the employees had a positive attitude towards CRM practices and the most common activities undertaken were studying the existing database of the customers and personal counseling. The benefits of CRM are increased customer satisfaction and increased customer retention.",
"title": ""
},
{
"docid": "b0d855c080b3862a287fdc505d08f913",
"text": "Over the past decade, the remote-sensing community has eagerly adopted unmanned aircraft systems (UAS) as a costeffective means to capture imagery at spatial and temporal resolutions not typically feasible with manned aircraft and satellites. The rapid adoption has outpaced our understanding of the relationships between data collection methods and data quality, causing uncertainties in data and products derived from UAS and necessitating exploration into how researchers are using UAS for terrestrial applications. We synthesize these procedures through a meta-analysis of UAS applications alongside a review of recent, basic science research surrounding theory and method development. We performed a search of the Web of Science (WoS) database on 17 May 2017 using UAS-related keywords to identify all peer-reviewed studies indexed by WoS. We manually filtered the results to retain only terrestrial studies (n 1⁄4 412) and further categorized results into basic theoretical studies (n 1⁄4 63), method development (n 1⁄4 63), and applications (n 1⁄4 286). After randomly selecting a subset of applications (n 1⁄4 108), we performed an in-depth content analysis to examine platforms, sensors, data capture parameters (e.g. flight altitude, spatial resolution, imagery overlap, etc.), preprocessing procedures (e.g. radiometric and geometric corrections), and analysis techniques. Our findings show considerable variation in UAS practices, suggesting a need for establishing standardized image collection and processing procedures. We reviewed basic research and methodological developments to assess how data quality and uncertainty issues are being addressed and found those findings are not necessarily being considered in application studies. ARTICLE HISTORY Received 30 September 2017 Accepted 5 December 2017",
"title": ""
},
{
"docid": "a30de4a213fe05c606fb16d204b9b170",
"text": "– The recent work on cross-country regressions can be compared to looking at “a black cat in a dark room”. Whether or not all this work has accomplished anything on the substantive economic issues is a moot question. But the search for “a black cat ” has led to some progress on the econometric front. The purpose of this paper is to comment on this progress. We discuss the problems with the use of cross-country panel data in the context of two problems: The analysis of economic growth and that of the purchasing power parity (PPP) theory. A propos de l’emploi des méthodes de panel sur des données inter-pays RÉSUMÉ. – Les travaux récents utilisant des régressions inter-pays peuvent être comparés à la recherche d'« un chat noir dans une pièce sans lumière ». La question de savoir si ces travaux ont apporté quelque chose de significatif à la connaissance économique est assez controversée. Mais la recherche du « chat noir » a conduit à quelques progrès en économétrie. L'objet de cet article est de discuter de ces progrès. Les problèmes posés par l'utilisation de panels de pays sont discutés dans deux contextes : celui de la croissance économique et de la convergence d'une part ; celui de la théorie de la parité des pouvoirs d'achat d'autre part. * G.S. MADDALA: Department of Economics, The Ohio State University. I would like to thank M. NERLOVE, P. SEVESTRE and an anonymous referee for helpful comments. Responsability for the omissions and any errors is my own. ANNALES D’ÉCONOMIE ET DE STATISTIQUE. – N° 55-56 – 1999 « The Gods love the obscure and hate the obvious » BRIHADARANYAKA UPANISHAD",
"title": ""
},
{
"docid": "b8505166c395750ee47127439a4afa1a",
"text": "Modern replicated data stores aim to provide high availability, by immediately responding to client requests, often by implementing objects that expose concurrency. Such objects, for example, multi-valued registers (MVRs), do not have sequential specifications. This paper explores a recent model for replicated data stores that can be used to precisely specify causal consistency for such objects, and liveness properties like eventual consistency, without revealing details of the underlying implementation. The model is used to prove the following results: An eventually consistent data store implementing MVRs cannot satisfy a consistency model strictly stronger than observable causal consistency (OCC). OCC is a model somewhat stronger than causal consistency, which captures executions in which client observations can use causality to infer concurrency of operations. This result holds under certain assumptions about the data store. Under the same assumptions, an eventually consistent and causally consistent replicated data store must send messages of unbounded size: If s objects are supported by n replicas, then, for every k > 1, there is an execution in which an Ω({n,s} k)-bit message is sent.",
"title": ""
}
] |
scidocsrr
|
93d3d1f2db0b9aca426c994afae64af3
|
Flowers or a robot army?: encouraging awareness & activity with personal, mobile displays
|
[
{
"docid": "1aeca45f1934d963455698879b1e53e8",
"text": "A sedentary lifestyle is a contributing factor to chronic diseases, and it is often correlated with obesity. To promote an increase in physical activity, we created a social computer game, Fish'n'Steps, which links a player’s daily foot step count to the growth and activity of an animated virtual character, a fish in a fish tank. As further encouragement, some of the players’ fish tanks included other players’ fish, thereby creating an environment of both cooperation and competition. In a fourteen-week study with nineteen participants, the game served as a catalyst for promoting exercise and for improving game players’ attitudes towards physical activity. Furthermore, although most player’s enthusiasm in the game decreased after the game’s first two weeks, analyzing the results using Prochaska's Transtheoretical Model of Behavioral Change suggests that individuals had, by that time, established new routines that led to healthier patterns of physical activity in their daily lives. Lessons learned from this study underscore the value of such games to encourage rather than provide negative reinforcement, especially when individuals are not meeting their own expectations, to foster long-term behavioral change.",
"title": ""
}
] |
[
{
"docid": "e28feb56ebc33a54d13452a2ea3a49f7",
"text": "Ping Yan, Hsinchun Chen, and Daniel Zeng Department of Management Information Systems University of Arizona, Tucson, Arizona [email protected]; {hchen, zeng}@eller.arizona.edu",
"title": ""
},
{
"docid": "0a58aa0c5dff94efa183fcf6fb7952f6",
"text": "When people explore new environments they often use landmarks as reference points to help navigate and orientate themselves. This research paper examines how spatial datasets can be used to build a system for use in an urban environment which functions as a city guide, announcing Features of Interest (FoI) as they become visible to the user (not just proximal), as the user moves freely around the city. Visibility calculations for the FoIs were pre-calculated based on a digital surface model derived from LIDAR (Light Detection and Ranging) data. The results were stored in a textbased relational database management system (RDBMS) for rapid retrieval. All interaction between the user and the system was via a speech-based interface, allowing the user to record and request further information on any of the announced FoI. A prototype system, called Edinburgh Augmented Reality System (EARS) , was designed, implemented and field tested in order to assess the effectiveness of these ideas. The application proved to be an innovating, ‘non-invasive’ approach to augmenting the user’s reality",
"title": ""
},
{
"docid": "1f463047c09ae83aa8e295327eab2f49",
"text": "This paper presents our system submitted to the EmotionX challenge. It is an emotion detection task on dialogues in the EmotionLines dataset. We formulate this as a hierarchical network where network learns data representation at both utterance level and dialogue level. Our model is inspired by Hierarchical Attention network (HAN) and uses pre-trained word embeddings as features. We formulate emotion detection in dialogues as a sequence labeling problem to capture the dependencies among labels. We report the performance accuracy for four emotions (anger, joy, neutral and sadness). The model achieved unweighted accuracy of 55.38% on Friends test dataset and 56.73% on EmotionPush test dataset. We report an improvement of 22.51% in Friends dataset and 36.04% in EmotionPush dataset over baseline results.",
"title": ""
},
{
"docid": "1094c5dfc72a27324753af3891b45369",
"text": "Recent studies demonstrate the effectiveness of Recurrent Neural Networks (RNNs) for action recognition in videos. However, previous works mainly utilize video-level category as supervision to train RNNs, which may prohibit RNNs to learn complex motion structures along time. In this paper, we propose a recurrent pose-attention network (RPAN) to address this challenge, where we introduce a novel pose-attention mechanism to adaptively learn pose-related features at every time-step action prediction of RNNs. More specifically, we make three main contributions in this paper. Firstly, unlike previous works on pose-related action recognition, our RPAN is an end-toend recurrent network which can exploit important spatialtemporal evolutions of human pose to assist action recognition in a unified framework. Secondly, instead of learning individual human-joint features separately, our poseattention mechanism learns robust human-part features by sharing attention parameters partially on the semanticallyrelated human joints. These human-part features are then fed into the human-part pooling layer to construct a highlydiscriminative pose-related representation for temporal action modeling. Thirdly, one important byproduct of our RPAN is pose estimation in videos, which can be used for coarse pose annotation in action videos. We evaluate the proposed RPAN quantitatively and qualitatively on two popular benchmarks, i.e., Sub-JHMDB and PennAction. Experimental results show that RPAN outperforms the recent state-of-the-art methods on these challenging datasets.",
"title": ""
},
{
"docid": "4568ac6c719c05cd1238828a24c00492",
"text": "The personalized recommender system is proposed to solve the problem of information overload and widely applied in many domains. The job recommender systems for job recruiting domain have emerged and enjoyed explosive growth in the last decades. User profiles and recommendation technologies in the job recommender system have gained attention and investigated in academia and implemented for some application cases in industries. In this paper, we introduce some basic concepts of user profile and some common recommendation technologies based on the existing research. Finally, we survey some typical job recommender systems which have been achieved and have a general comprehension of job recommender systems.",
"title": ""
},
{
"docid": "d76b4c234b72e0bf8615f224d5281e66",
"text": "Data centers are the heart of the global economy. In the mid-1990s, the costs of these large computing facilities were dominated by the costs of the information technology (IT) equipment that they housed, but no longer. As the electrical power used by IT equipment per dollar of equipment cost has increased, the annualized facility costs associated with powering and cooling IT equipment has in some cases grown to equal the annualized capital costs of the IT equipment itself. The trend towards ever more electricity-intensive IT equipment continues, which means that direct IT equipment acquisition costs will be a less important determinant of the economics of computing services in the future. Consider Figure ES-1, which shows the importance of different data center cost components as a function of power use per thousand dollars of server cost. If power per server cost continues to increase, the indirect power-related infrastructure costs will soon exceed the annualized direct cost of purchasing the IT equipment in the data center. Ken Brill of the Uptime Institute has called these trends \" the economic breakdown of Moore's Law \" , highlighting the growing importance of power-related indirect costs to the overall economics of information technology. The industry has in general assumed that the cost reductions and growth in computing speed related to Moore's law would continue unabated for years to come, and this may be true at the level of individual server systems. Unfortunately, far too little attention has been paid to the true total costs for data center facilities, in which the power-related indirect costs threaten to slow the cost reductions from Moore's law. These trends have important implications for the design, construction and operation of data centers. The companies delivering so-called \" cloud computing \" services have been aware of these economic trends for years, though the sophistication of their responses to them has varied. Most other companies that own data centers, for which computing is not their core business, have significantly lagged behind the vertically organized large-scale computing providers in addressing these issues. There are technical solutions for improving data center efficiency but the most important and most neglected solutions relate to institutional changes that can help companies focus on reducing the total costs of computing services. The first steps, of course, are to measure costs in a comprehensive way, eliminate institutional impediments, and reward those who successfully reduce these costs. This article assesses …",
"title": ""
},
{
"docid": "19d767f817c3036061433fde12043b60",
"text": "In this paper, we introduce an approach to automatically map a standard distributional semantic space onto a set-theoretic model. We predict that there is a functional relationship between distributional information and vectorial concept representations in which dimensions are predicates and weights are generalised quantifiers. In order to test our prediction, we learn a model of such relationship over a publicly available dataset of feature norms annotated with natural language quantifiers. Our initial experimental results show that, at least for domain-specific data, we can indeed map between formalisms, and generate high-quality vector representations which encapsulate set overlap information. We further investigate the generation of natural language quantifiers from such vectors.",
"title": ""
},
{
"docid": "7de71ae26d0efb98487aab1c8b112a23",
"text": "Hydroinformatics emerged in 1991 when numerical modelling of water expanded its range from one that was restricted to the modelling of flows to a much wider ranging sociotechnical discipline that supports stakeholders in addressing their water-related problems. However, despite numerous advances in hydroinformatics, the current practical and research effort is still very much technocratic (or techno-centric) which in turn may restrict the potential of hydroinformatics in its scope and its reach. This Special Issue, through the compilation of thirteen papers, illustrates some of the developments and applications in the field of hydroinformatics and marks the twenty-five years of its existence. We hope that this will help to further raise the awareness of the subject and its developments and applications. In the Editorial of this Special Issue, we briefly discuss the origin of hydroinformatics and we introduce the papers that are featuring in this Special Issue. We also give a way forward for future research and application.",
"title": ""
},
{
"docid": "7eec1e737523dc3b78de135fc71b058f",
"text": "Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences epsivnerally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This \"pyramid match\" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches",
"title": ""
},
{
"docid": "2e7d42b44affb9fa1c12833ea8b00a96",
"text": "The objective of this work is human pose estimation in videos, where multiple frames are available. We investigate a ConvNet architecture that is able to benefit from temporal context by combining information across the multiple frames using optical flow. To this end we propose a network architecture with the following novelties: (i) a deeper network than previously investigated for regressing heatmaps, (ii) spatial fusion layers that learn an implicit spatial model, (iii) optical flow is used to align heatmap predictions from neighbouring frames, and (iv) a final parametric pooling layer which learns to combine the aligned heatmaps into a pooled confidence map. We show that this architecture outperforms a number of others, including one that uses optical flow solely at the input layers, one that regresses joint coordinates directly, and one that predicts heatmaps without spatial fusion. The new architecture outperforms the state of the art by a large margin on three video pose estimation datasets, including the very challenging Poses in the Wild dataset, and outperforms other deep methods that don't use a graphical model on the single-image FLIC benchmark (and also [5, 35] in the high precision region).",
"title": ""
},
{
"docid": "561320dd717f1a444735dfa322dfbd31",
"text": "IEEE 802.11 based WLAN systems have gained interest to be used in the military and public authority environments, where the radio conditions can be harsh due to intentional jamming. The radio environment can be difficult also in commercial and civilian deployments since the unlicensed frequency bands are crowded. To study these problems, we built a test bed with a controlled signal path to measure the effects of different interfering signals to WLAN communications. We use continuous wideband noise jamming as the point of comparison, and focus on studying the effect of pulsed jamming and frequency sweep jamming. In addition, we consider also medium access control (MAC) interference. Based on the results, WLAN systems do not seem to be sensitive to the tested short noise jamming pulses. Under longer pulses, the effects are seen, and long data frames are more vulnerable to jamming than short ones. In fact, even a small amount of long frames in a data stream can ruin the performance of the whole link. Under frequency sweep jamming, slow sweeps with narrowband jamming signals can be quite harmful to WLAN communications. The results of MAC jamming show significant variation in performance between the different devices: The clear channel assessment (CCA) mechanism of some devices can be jammed very easily by using WLAN-like jamming signals. As a side product, the study also revealed some countermeasures against jamming.",
"title": ""
},
{
"docid": "e6c126454c7d7e99524ff55887d9b15d",
"text": "Dense 3D reconstruction of real world objects containing textureless, reflective and specular parts is a challenging task. Using general smoothness priors such as surface area regularization can lead to defects in the form of disconnected parts or unwanted indentations. We argue that this problem can be solved by exploiting the object class specific local surface orientations, e.g. a car is always close to horizontal in the roof area. Therefore, we formulate an object class specific shape prior in the form of spatially varying anisotropic smoothness terms. The parameters of the shape prior are extracted from training data. We detail how our shape prior formulation directly fits into recently proposed volumetric multi-label reconstruction approaches. This allows a segmentation between the object and its supporting ground. In our experimental evaluation we show reconstructions using our trained shape prior on several challenging datasets.",
"title": ""
},
{
"docid": "b9a84b723f946ab8c3dd17ae98b5868a",
"text": "For many NLP applications such as Information Extraction and Sentiment Detection, it is of vital importance to distinguish between synonyms and antonyms. While the general assumption is that distributional models are not suitable for this task, we demonstrate that using suitable features, differences in the contexts of synonymous and antonymous German adjective pairs can be identified with a simple word space model. Experimenting with two context settings (a simple windowbased model and a ‘co-disambiguation model’ to approximate adjective sense disambiguation), our best model significantly outperforms the 50% baseline and achieves 70.6% accuracy in a synonym/antonym classification task.",
"title": ""
},
{
"docid": "e399fd670b8b1f460d99ed06f04be41b",
"text": "Although the advantages of case study design are widely recognised, its original positivist underlying assumptions may mislead interpretive researchers aiming at theory building. The paper discusses the limitations of the case study design for theory building and explains how grounded theory systemic process adds to the case study design. The author reflects upon his experience in conducting research on the articulation of both traditional social networks and new virtual networks in six rural communities in Peru, using both case study design and grounded theory in a combined fashion in order to discover an emergent theory.",
"title": ""
},
{
"docid": "7ba37f2dcf95f36727e1cd0f06e31cc0",
"text": "The neonate receiving parenteral nutrition (PN) therapy requires a physiologically appropriate solution in quantity and quality given according to a timely, cost-effective strategy. Maintaining tissue integrity, metabolism, and growth in a neonate is challenging. To support infant growth and influence subsequent development requires critical timing for nutrition assessment and intervention. Providing amino acids to neonates has been shown to improve nitrogen balance, glucose metabolism, and amino acid profiles. In contrast, supplying the lipid emulsions (currently available in the United States) to provide essential fatty acids is not the optimal composition to help attenuate inflammation. Recent investigations with an omega-3 fish oil IV emulsion are promising, but there is need for further research and development. Complications from PN, however, remain problematic and include infection, hepatic dysfunction, and cholestasis. These complications in the neonate can affect morbidity and mortality, thus emphasizing the preference to provide early enteral feedings, as well as medication therapy to improve liver health and outcome. Potential strategies aimed at enhancing PN therapy in the neonate are highlighted in this review, and a summary of guidelines for practical management is included.",
"title": ""
},
{
"docid": "296b19294127fe35e2f6a7eef2670cd5",
"text": "The importance of establishing an information security culture in an organization has become a well established idea. The aim of such a culture is to address the various human factors that can affect an organization’s overall information security efforts. However, understandingboth the various elements of an information security culture, as well as the relationships between these elements, can still be problematic. Schein’s definition of a corporateculture is often used to aid understanding of an information security culture. This paper briefly introduces Schein’s model. It then incorporates the important role knowledge plays in information security into this definition. Finally, a conceptual framework to aid understanding of the interactions between the various elements of such a culture, is presented. This framework is explained by means of illustrative examples, and it is suggested that this conceptual framework can be a useful aid to understanding information security culture.",
"title": ""
},
{
"docid": "b28b1e14d8b4dac2a4695cf3e6bdc4b0",
"text": "An algorithm for generating Hilbert's space-filling curve in a byte-oriented manner is presented. In the context of one application of space-filling curves, the algorithm may be modified so that the results are correct for continua rather than for quantized spaces.",
"title": ""
},
{
"docid": "18487821406b5a262a72e1cb46a05d2b",
"text": "This study presents the applicability of an ensemble of artificial neural networks (ANNs) and learning paradigms for weather forecasting in southern Saskatchewan, Canada. The proposed ensemble method for weather forecasting has advantages over other techniques like linear combination. Generally, the output of an ensemble is a weighted sum, which are weight-fixed, with the weights being determined from the training or validation data. In the proposed approach, weights are determined dynamically from the respective certainties of the network outputs. The more certain a network seems to be of its decision, the higher the weight. The proposed ensemble model performance is contrasted with multi-layered perceptron network (MLPN), Elman recurrent neural network (ERNN), radial basis function network (RBFN), Hopfield model (HFM) predictive models and regression techniques. The data of temperature, wind speed and relative humidity are used to train and test the different models. With each model, 24-h-ahead forecasts are made for the winter, spring, summer and fall seasons. Moreover, the performance and reliability of the seven models are then evaluated by a number of statistical measures. Among the direct approaches employed, empirical results indicate that HFM is relatively less accurate and RBFN is relatively more reliable for the weather forecasting problem. In comparison, the ensemble of neural networks produced the most accurate forecasts.",
"title": ""
},
{
"docid": "76a7f7688238fb4c0d2dd2f817194302",
"text": "Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users’ political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral users – groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.",
"title": ""
},
{
"docid": "183df189a37dc4c4a174792fb8464d3d",
"text": "Rule engines form an essential component of most service execution frameworks in a Service Oriented Architecture (SOA) ecosystem. The efficiency of a service execution framework critically depends on the performance of the rule engine it uses to manage it's operations. Most common rule engines suffer from the fundamental performance issues of the Rete algorithm that they internally use for faster matching of rules against incoming facts. In this paper, we present the design of a scalable architecture of a service rule engine, where a rule clustering and hashing based mechanism is employed for lazy loading of relevant service rules and a prediction based technique for rule evaluation is used for faster actuation of the rules. We present experimental results to demonstrate the efficacy of the proposed rule engine framework over contemporary ones.",
"title": ""
}
] |
scidocsrr
|
569aa9d08456e619a2357aa76957b1a5
|
Comparing Hybrid Peer-to-Peer Systems
|
[
{
"docid": "405fd8fd4d08cd26605b93f75c3038ae",
"text": "Query-processing costs on large text databases are dominated by the need to retrieve and scan the inverted list of each query term. Retrieval time for inverted lists can be greatly reduced by the use of compression, but this adds to the CPU time required. Here we show that the CPU component of query response time for conjunctive Boolean queries and for informal ranked queries can be similarly reduced, at little cost in terms of storage, by the inclusion of an internal index in each compressed inverted list. This method has been applied in a retrieval system for a collection of nearly two million short documents. Our experimental results show that the self-indexing strategy adds less than 20% to the size of the compressed inverted file, which itself occupies less than 10% of the indexed text, yet can reduce processing time for Boolean queries of 5-10 terms to under one fifth of the previous cost. Similarly, ranked queries of 40-50 terms can be evaluated in as little as 25% of the previous time, with little or no loss of retrieval effectiveness.",
"title": ""
}
] |
[
{
"docid": "62bf93deeb73fab74004cb3ced106bac",
"text": "Since the publication of the Design Patterns book, a large number of object-oriented design patterns have been identified and codified. As part of the pattern form, objectoriented design patterns must indicate their relationships with other patterns, but these relationships are typically described very briefly, and different collections of patterns describe different relationships in different ways. In this paper we describe and classify the common relationships between object oriented design patterns. Practitioners can use these relationships to help them identity those patterns which may be applicable to a particular problem, and pattern writers can use these relationships to help them integrate new patterns into the body of the patterns literature.",
"title": ""
},
{
"docid": "3fa1abd26925407bbf34716060a1a589",
"text": "Generating knowledge from data is an increasingly important activity. This process of data exploration consists of multiple tasks: data ingestion, visualization, statistical analysis, and storytelling. Though these tasks are complementary, analysts often execute them in separate tools. Moreover, these tools have steep learning curves due to their reliance on manual query specification. Here, we describe the design and implementation of DIVE, a web-based system that integrates state-of-the-art data exploration features into a single tool. DIVE contributes a mixed-initiative interaction scheme that combines recommendation with point-and-click manual specification, and a consistent visual language that unifies different stages of the data exploration workflow. In a controlled user study with 67 professional data scientists, we find that DIVE users were significantly more successful and faster than Excel users at completing predefined data visualization and analysis tasks.",
"title": ""
},
{
"docid": "31a2e6948a816a053d62e3748134cdc2",
"text": "In model-based reinforcement learning, generative and temporal models of environments can be leveraged to boost agent performance, either by tuning the agent’s representations during training or via use as part of an explicit planning mechanism. However, their application in practice has been limited to simplistic environments, due to the difficulty of training such models in larger, potentially partially-observed and 3D environments. In this work we introduce a novel action-conditioned generative model of such challenging environments. The model features a non-parametric spatial memory system in which we store learned, disentangled representations of the environment. Low-dimensional spatial updates are computed using a state-space model that makes use of knowledge on the prior dynamics of the moving agent, and high-dimensional visual observations are modelled with a Variational Auto-Encoder. The result is a scalable architecture capable of performing coherent predictions over hundreds of time steps across a range of partially observed 2D and 3D environments.",
"title": ""
},
{
"docid": "5c2b73276c9f845d7eef5c9dc4cea2a1",
"text": "The detection of QR codes, a type of 2D barcode, as described in the literature consists merely in the determination of the boundaries of the symbol region in images obtained with the specific intent of highlighting the symbol. However, many important applications such as those related with accessibility technologies or robotics, depends on first detecting the presence of a barcode in an environment. We employ Viola-Jones rapid object detection framework to address the problem of finding QR codes in arbitrarily acquired images. This framework provides an efficient way to focus the detection process in promising regions of the image and a very fast feature calculation approach for pattern classification. An extensive study of variations in the parameters of the framework for detecting finder patterns, present in three corners of every QR code, was carried out. Detection accuracy superior to 90%, with controlled number of false positives, is achieved. We also propose a post-processing algorithm that aggregates the results of the first step and decides if the detected finder patterns are part of QR code symbols. This two-step processing is done in real time.",
"title": ""
},
{
"docid": "3498872b0b87b9eaec44b2a8b4da6461",
"text": "4 At the present, emotion is considered as a critical point of human behaviour, and thus it should be embedded within the reasoning module when an intelligent system or a autonomous robot aims to emulate or anticipate human reactions. Therefore, current research in Artificial Intelligence shows an increasing interest in artificial emotion research for developing human-like systems. Based on Thayer’s emotion model and Fuzzy Cognitive Maps, this paper presents a proposal for forecasting artificial emotions. It provides an innovative method for forecasting artificial emotions and designing an affective decision system. This work includes an experiment with three simulated artificial scenarios for testing the proposal. Each scenario generate different emotions according to the artificial experimental model.",
"title": ""
},
{
"docid": "c215a497d39f4f95a9fc720debb14b05",
"text": "Adding frequency reconfigurability to a compact metamaterial-inspired antenna is investigated. The antenna is a printed monopole with an incorporated slot and is fed by a coplanar waveguide (CPW) line. This antenna was originally inspired from the concept of negative-refractive-index metamaterial transmission lines and exhibits a dual-band behavior. By using a varactor diode, the lower band (narrowband) of the antenna, which is due to radiation from the incorporated slot, can be tuned over a broad frequency range, while the higher band (broadband) remains effectively constant. A detailed equivalent circuit model is developed that predicts the frequency-tuning behavior for the lower band of the antenna. The circuit model shows the involvement of both CPW even and odd modes in the operation of the antenna. Experimental results show that, for a varactor diode capacitance approximately ranging from 0.1-0.7 pF, a tuning range of 1.6-2.23 GHz is achieved. The size of the antenna at the maximum frequency is 0.056 λ0 × 0.047 λ0 and the antenna is placed over a 0.237 λ0 × 0.111 λ0 CPW ground plane (λ0 being the wavelength in vacuum).",
"title": ""
},
{
"docid": "326b8725496032adad39e465879f4671",
"text": "This study compared the magnitude of muscle damage induced when consecutive drop jumps (DJs) were performed on sand vs. firm (wood) surfaces from a height of 0.6 m. Eight subjects performed DJs on a sand surface at a depth of 0.2 m (S condition), and 8 other subjects performed DJs on a wood surface (F condition). Each set consisted of 20 DJs with an interval of 10 seconds between jumps. Subjects performed 5 sets of DJs with 2 minutes between sets. Maximal isometric force, muscle soreness, and plasma creatine kinase (CK) activity were measured immediately before and immediately after the DJ exercise as well as 1, 24, 48, 72, and 96 hours after the DJ exercise. All measures changed significantly (p < 0.05) after exercise for both conditions; however, significantly (p < 0.05) smaller changes in these measures were evident for the S condition than for the F condition. These results show that DJs on a sand surface induce less muscle damage than on a firm surface. Training on sand may improve aerobic capacity or strength with a low risk of muscle damage.",
"title": ""
},
{
"docid": "35a85bb270f1140d4dbb1090fd1e26cc",
"text": "English. The Citation Contexts of a cited entity can be seen as little tesserae that, fit together, can be exploited to follow the opinion of the scientific community towards that entity as well as to summarize its most important contents. This mosaic is an excellent resource of information also for identifying topic specific synonyms, indexing terms and citers’ motivations, i.e. the reasons why authors cite other works. Is a paper cited for comparison, as a source of data or just for additional info? What is the polarity of a citation? Different reasons for citing reveal also different weights of the citations and different impacts of the cited authors that go beyond the mere citation count metrics. Identifying the appropriate Citation Context is the first step toward a multitude of possible analysis and researches. So far, Citation Context have been defined in several ways in literature, related to different purposes, domains and applications. In this paper we present different dimensions of Citation Context investigated by researchers through the years in order to provide an introductory review of the topic to anyone approaching this subject. Italiano. Possiamo pensare ai Contesti Citazionali come tante tessere che, unite, possono essere sfruttate per seguire l’opinione della comunità scientifica riguardo ad un determinato lavoro o per riassumerne i contenuti più importanti. Questo mosaico di informazioni può essere utilizzato per identificare sinonimi specifici e Index Terms nonchè per individuare i motivi degli autori dietro le citazioni. Identificare il Contesto Citazionale ottimale è il primo passo per numerose analisi e ricerche. Il Contesto Citazionale è stato definito in diversi modi in letteratura, in relazione a differenti scopi, domini e applicazioni. In questo paper presentiamo le principali dimensioni testuali di Contesto Citazionale investigate dai ricercatori nel corso degli",
"title": ""
},
{
"docid": "fbbf7c30f7ebcd2b9bbc9cc7877309b1",
"text": "People detection is essential in a lot of different systems. Many applications nowadays tend to require people detection to achieve certain tasks. These applications come under many disciplines, such as robotics, ergonomics, biomechanics, gaming and automotive industries. This wide range of applications makes human body detection an active area of research. With the release of depth sensors or RGB-D cameras such as Micosoft Kinect, this area of research became more active, specially with their affordable price. Human body detection requires the adaptation of many scenarios and situations. Various conditions such as occlusions, background cluttering and props attached to the human body require training on custom built datasets. In this paper we present an approach to prepare training datasets to detect and track human body with attached props. The proposed approach uses rigid body physics simulation to create and animate different props attached to the human body. Three scenarios are implemented. In the first scenario the prop is closely attached to the human body, such as a person carrying a backpack. In the second scenario, the prop is slightly attached to the human body, such as a person carrying a briefcase. In the third scenario the prop is not attached to the human body, such as a person dragging a trolley bag. Our approach gives results with accuracy of 93% in identifying both the human body parts and the attached prop in all the three scenarios.",
"title": ""
},
{
"docid": "fdbca2e02ac52afd687331048ddee7d3",
"text": "Type-2 fuzzy sets let us model and minimize the effects of uncertainties in rule-base fuzzy logic systems. However, they are difficult to understand for a variety of reasons which we enunciate. In this paper, we strive to overcome the difficulties by: 1) establishing a small set of terms that let us easily communicate about type-2 fuzzy sets and also let us define such sets very precisely, 2) presenting a new representation for type-2 fuzzy sets, and 3) using this new representation to derive formulas for union, intersection and complement of type-2 fuzzy sets without having to use the Extension Principle.",
"title": ""
},
{
"docid": "1071d0c189f9220ba59acfca06c5addb",
"text": "A 1.6 Gb/s receiver for optical communication has been designed and fabricated in a 0.25-/spl mu/m CMOS process. This receiver has no transimpedance amplifier and uses the parasitic capacitor of the flip-chip bonded photodetector as an integrating element and resolves the data with a double-sampling technique. A simple feedback loop adjusts a bias current to the average optical signal, which essentially \"AC couples\" the input. The resulting receiver resolves an 11 /spl mu/A input, dissipates 3 mW of power, occupies 80 /spl mu/m/spl times/50 /spl mu/m of area and operates at over 1.6 Gb/s.",
"title": ""
},
{
"docid": "c8a9919a2df2cfd730816cd0171f08dd",
"text": "In this paper, we propose a new deep network that learns multi-level deep representations for image emotion classi fication (MldrNet). Image emotion can be recognized through image semantics, image aesthetics and low-level visual fea tures from both global and local views. Existing image emotion classification works using hand-crafted features o r deep features mainly focus on either low-level visual featu res or semantic-level image representations without taking al l factors into consideration. Our proposed MldrNet unifies deep representations of three levels, i.e. image semantics , image aesthetics and low-level visual features through mul tiple instance learning (MIL) in order to effectively cope wit h noisy labeled data, such as images collected from the Intern et. Extensive experiments on both Internet images and abstract paintings demonstrate the proposed method outperforms the state-of-the-art methods using deep features or hand-craf ted features. The proposed approach also outperforms the state of-the-art methods with at least 6% performance improvement in terms of overall classification accuracy.",
"title": ""
},
{
"docid": "a436bdc20d63dcf4f0647005bb3314a7",
"text": "The purpose of this study is to evaluate the feasibility of the integration of concept maps and tablet PCs in anti-phishing education for enhancing students’ learning motivation and achievement. The subjects were 155 students from grades 8 and 9. They were divided into an experimental group (77 students) and a control group (78 students). To begin with, the two groups received identical anti-phishing training: the teacher explained the concept of anti-phishing and asked the students questions; the students then used tablet PCs for polling and answering the teachers’ questions. Afterwards, the two groups performed different group activities: the experimental group was divided into smaller groups, which used tablet PCs to draw concept maps; the control group was also divided into groups which completed worksheets. The study found that the use of concept maps on tablet PCs during the anti-phishing education significantly enhanced the students’ learning motivation when their initial motivation was already high. For learners with low initial motivation or prior knowledge, the use of worksheets could increase their posttest achievement and motivation. This study therefore proposes that motivation and achievement in teaching the anti-phishing concept can be effectively enhanced if the curriculum is designed based on the students’ learning preferences or prior knowledge, in conjunction with the integration of mature and accessible technological media into the learning activities. The findings can also serve as a reference for anti-phishing educators and researchers.",
"title": ""
},
{
"docid": "65cf5bc71931ba92c85dbecddaa3f86f",
"text": "BACKGROUND\nIt has been reported that mu-opioid receptor activation leads to a sustained increase in glutamate synaptic effectiveness at the N-methyl-D-aspartate (NMDA) receptor level, a system associated with central hypersensitivity to pain. One hypothesis is that postoperative pain may result partly from the activation of NMDA pain facilitatory processes induced by opiate treatment per se. The authors tested here the effectiveness of the opiate analgesic fentanyl for eliciting a delayed enhancement in pain sensitivity.\n\n\nMETHODS\nThe consequences of four bolus injections (every 15 min) of fentanyl (20-100 microg/kg per injection, subcutaneously) on immediate (for several hours) and long-term (for several days) sensitivity to nociceptive stimuli in the rat (paw-pressure vocalization test) were evaluated. The effects of the combination of the NMDA-receptor antagonist ketamine (10 mg/kg, subcutaneously) with fentanyl also were assessed.\n\n\nRESULTS\nFentanyl administration exhibited a biphasic time-dependent effect: first, an early response (for 2-5 h) associated with a marked increase in nociceptive threshold (analgesia), and second, a later response associated with sustained lowering of the nociceptive threshold (5 days for the longest effect) below the basal value (30% of decrease for the maximal effect) indicative of hyperalgesia. The higher the fentanyl dose used, the more pronounced was the fentanyl-induced hyperalgesia. Ketamine pretreatment, which had no analgesic effect on its own, enhanced the earlier response (analgesia) and prevented the development of long-lasting hyperalgesia.\n\n\nCONCLUSIONS\nFentanyl activates NMDA pain facilitatory processes, which oppose analgesia and lead to long-lasting enhancement in pain sensitivity.",
"title": ""
},
{
"docid": "6381c10a963b709c4af88047f38cc08c",
"text": "A great deal of research has been focused on solving the job-shop problem (ΠJ), over the last forty years, resulting in a wide variety of approaches. Recently, much effort has been concentrated on hybrid methods to solve ΠJ as a single technique cannot solve this stubborn problem. As a result much effort has recently been concentrated on techniques that combine myopic problem specific methods and a meta-strategy which guides the search out of local optima. These approaches currently provide the best results. Such hybrid techniques are known as iterated local search algorithms or meta-heuristics. In this paper we seek to assess the work done in the job-shop domain by providing a review of many of the techniques used. The impact of the major contributions is indicated by applying these techniques to a set of standard benchmark problems. It is established that methods such as Tabu Search, Genetic Algorithms, Simulated Annealing should be considered complementary rather than competitive. In addition this work suggests guide-lines on features that should be incorporated to create a good ΠJ system. Finally the possible direction for future work is highlighted so that current barriers within ΠJ maybe surmounted as we approach the 21st Century.",
"title": ""
},
{
"docid": "7a619f349e8b62b016db98e7526c04a6",
"text": "Although sensor noise is generally known as a very reliable means to uniquely identify digital cameras, care has to be taken with respect to camera model characteristics that may cause false accusations. While earlier reports focused on so-called linear patterns with a regular grid structure, also distortions due to geometric corrections of radial lens distortion have recently gained interest. Here, we report observations from a case study with the 'Dresden Image Database' that revealed further artefacts. We found diagonal line artefacts in Nikon CoolPix S710 sensor noise, as well as non-trivial dependencies between sensor noise, exposure time (FujiFilm J50) and focal length (Casio EX-Z150). At slower shutter speeds, original J50 images exhibit a slight horizontal shift, whereas EX-Z150 images exhibit irregular geometric distortions, which depend on the focal length and which become visible in the p-map of state-of-the-art resampling detectors. The observed artefacts may provide valuable clues for camera model identification, but also call for particular attention when creating reference noise patterns for applications that require low false negative rates.",
"title": ""
},
{
"docid": "fa403300ccf820da5be63e6be3dc8e8f",
"text": "The perception of traffic related objects in the vehicles environment is an essential prerequisite for future autonomous driving. Cameras are particularly suited for this task, as the traffic relevant information of a scene is inferable from its visual appearance. In traffic scene understanding, semantic segmentation denotes the task of generating and labeling regions in the image that correspond to specific object categories, such as cars or road area. In contrast, the task of scene recognition assigns a global label to an image, that reflects the overall category of the scene. This paper presents a deep neural network (DNN) capable of solving both problems in a computationally efficient manner. The architecture is designed to avoid redundant computations, as the task specific decoders share a common feature encoder stage. A novel Hadamard layer with element-wise weights efficiently exploits spatial priors for the segmentation task. Traffic scene segmentation is investigated in conjunction with road topology recognition based on the cityscapes dataset [1] augmented with manually labeled road topology ground truth data.",
"title": ""
},
{
"docid": "a5e03e76925c838cfdfc328552c9e901",
"text": "OBJECTIVE\nIn this article, we describe some of the cognitive and system-based sources of detection and interpretation errors in diagnostic radiology and discuss potential approaches to help reduce misdiagnoses.\n\n\nCONCLUSION\nEvery radiologist worries about missing a diagnosis or giving a false-positive reading. The retrospective error rate among radiologic examinations is approximately 30%, with real-time errors in daily radiology practice averaging 3-5%. Nearly 75% of all medical malpractice claims against radiologists are related to diagnostic errors. As medical reimbursement trends downward, radiologists attempt to compensate by undertaking additional responsibilities to increase productivity. The increased workload, rising quality expectations, cognitive biases, and poor system factors all contribute to diagnostic errors in radiology. Diagnostic errors are underrecognized and underappreciated in radiology practice. This is due to the inability to obtain reliable national estimates of the impact, the difficulty in evaluating effectiveness of potential interventions, and the poor response to systemwide solutions. Most of our clinical work is executed through type 1 processes to minimize cost, anxiety, and delay; however, type 1 processes are also vulnerable to errors. Instead of trying to completely eliminate cognitive shortcuts that serve us well most of the time, becoming aware of common biases and using metacognitive strategies to mitigate the effects have the potential to create sustainable improvement in diagnostic errors.",
"title": ""
},
{
"docid": "949cec5752a66a67b0b2d101ea071171",
"text": "People wish to enjoy their everyday lives in various ways, among which entertainment plays a major role. In order to improve lifestyle with more ready access to entertainment content, we propose BlueTorrent, a P2P file sharing application based on ubiquitous Blue tooth-enabled devices such as PDAs, cellphones and smart phones. Using BlueTorrent, people can share audio/video contents as they move about shopping malls, airports, subway stations etc. BlueTorrent poses new challenges caused by limited bandwidth, short communications range, mobile users and variable population density. A key ingredient is efficient peer discovery. This paper approaches the problem by analyzing the Bluetooth periodic inquiry mode and by finding the optimum inquiry/connection time settings. At the application layer, the BlueTorrent index/block dissemination protocol is then designed and analyzed. The entire system is integrated and implemented both in simulation and in an experimental testbed. Simulation and measurement results are used to evaluate and validate the performance of BlueTorrent in content sharing scenarios",
"title": ""
},
{
"docid": "751e95c13346b18714c5ce5dcb4d1af2",
"text": "Purpose – The purpose of this paper is to propose how to minimize the risks of implementing business process reengineering (BPR) by measuring readiness. For this purpose, the paper proposes an assessment approach for readiness in BPR efforts based on the critical success and failure factors. Design/methodology/approach – A relevant literature review, which investigates success and failure indicators in BPR efforts is carried out and a new categorized list of indicators are proposed. This is a base for conducting a survey to measure the BPR readiness, which has been run in two companies and compared based on a diamond model. Findings – In this research, readiness indicators are determined based on critical success and failure factors. The readiness indicators include six categories. The first five categories, egalitarian leadership, collaborative working environment, top management commitment, supportive management, and use of information technology are positive indicators. The sixth category, resistance to change has a negative role. This paper reports survey results indicating BPR readiness in two Iranian companies. After comparing the position of the two cases, the paper offers several guidelines for amplifying the success points and decreasing failure points and hence, increasing the rate of success. Originality/value – High-failure rate of BPR has been introduced as a main barrier in reengineering processes. In addition, it makes a fear, which in turn can be a failure factor. This paper tries to fill the gap in the literature on decreasing risk in BPR projects by introducing a BPR readiness assessment approach. In addition, the proposed questionnaire is generic and can be utilized in a facilitated manner.",
"title": ""
}
] |
scidocsrr
|
b8b20cfa9bc0b2a188a62731bd5af6eb
|
Data + Intuition: A Hybrid Approach to Developing Product North Star Metrics
|
[
{
"docid": "ce53aa803d587301a47166c483ecec34",
"text": "Boosting takes on various forms with different programs using different loss functions, different base models, and different optimization schemes. The gbm package takes the approach described in [3] and [4]. Some of the terminology differs, mostly due to an effort to cast boosting terms into more standard statistical terminology (e.g. deviance). In addition, the gbm package implements boosting for models commonly used in statistics but not commonly associated with boosting. The Cox proportional hazard model, for example, is an incredibly useful model and the boosting framework applies quite readily with only slight modification [7]. Also some algorithms implemented in the gbm package differ from the standard implementation. The AdaBoost algorithm [2] has a particular loss function and a particular optimization algorithm associated with it. The gbm implementation of AdaBoost adopts AdaBoost’s exponential loss function (its bound on misclassification rate) but uses Friedman’s gradient descent algorithm rather than the original one proposed. So the main purposes of this document is to spell out in detail what the gbm package implements.",
"title": ""
},
{
"docid": "9500e8bbbb21df9cde0b2e4b8ea72d89",
"text": "The practice of crowdsourcing is transforming the Web and giving rise to a new field.",
"title": ""
},
{
"docid": "6d620c1862b053c97e3ce29a415550e1",
"text": "To understand whether a user is satisfied with the current search results, implicit behavior is a useful data source, with clicks being the best-known implicit signal. However, it is possible for a non-clicking user to be satisfied and a clicking user to be dissatisfied. Here we study additional implicit signals based on the relationship between the user's current query and the next query, such as their textual similarity and the inter-query time. Using a large unlabeled dataset, a labeled dataset of queries and a labeled dataset of user tasks, we analyze the relationship between these signals. We identify an easily-implemented rule that indicates dissatisfaction: that a similar query issued within a time interval that is short enough (such as five minutes) implies dissatisfaction. By incorporating additional query-based features in the model, we show that a query-based model (with no click information) can indicate satisfaction more accurately than click-based models. The best model uses both query and click features. In addition, by comparing query sequences in successful tasks and unsuccessful tasks, we observe that search success is an incremental process for successful tasks with multiple queries.",
"title": ""
}
] |
[
{
"docid": "0994065c757a88373a4d97e5facfee85",
"text": "Scholarly literature suggests digital marketing skills gaps in industry, but these skills gaps are not clearly identified. The research aims to specify any digital marketing skills gaps encountered by professionals working in communication industries. In-depth interviews were undertaken with 20 communication industry professionals. A focus group followed, testing the rigour of the data. We find that a lack of specific technical skills; a need for best practice guidance on evaluation metrics, and a lack of intelligent futureproofing for dynamic technological change and development are skills gaps currently challenging the communication industry. However, the challenge of integrating digital marketing approaches with established marketing practice emerges as the key skills gap. Emerging from the key findings, a Digital Marketer Model was developed, highlighting the key competencies and skills needed by an excellent digital marketer. The research concludes that guidance on best practice, focusing upon evaluation metrics, futureproofing and strategic integration, needs to be developed for the communication industry. The Digital Marketing Model should be subject to further testing in industry and academia. Suggestions for further research are discussed.",
"title": ""
},
{
"docid": "be66c05a023ea123a6f32614d2a8af93",
"text": "During the past three decades, the issue of processing spectral phase has been largely neglected in speech applications. There is no doubt that the interest of speech processing community towards the use of phase information in a big spectrum of speech technologies, from automatic speech and speaker recognition to speech synthesis, from speech enhancement and source separation to speech coding, is constantly increasing. In this paper, we elaborate on why phase was believed to be unimportant in each application. We provide an overview of advancements in phase-aware signal processing with applications to speech, showing that considering phase-aware speech processing can be beneficial in many cases, while it can complement the possible solutions that magnitude-only methods suggest. Our goal is to show that phase-aware signal processing is an important emerging field with high potential in the current speech communication applications. The paper provides an extended and up-to-date bibliography on the topic of phase aware speech processing aiming at providing the necessary background to the interested readers for following the recent advancements in the area. Our review expands the step initiated by our organized special session and exemplifies the usefulness of spectral phase information in a wide range of speech processing applications. Finally, the overview will provide some future work directions.",
"title": ""
},
{
"docid": "355d040cf7dd706f08ef4ce33d53a333",
"text": "Conversational participants tend to immediately and unconsciously adapt to each other’s language styles: a speaker will even adjust the number of articles and other function words in their next utterance in response to the number in their partner’s immediately preceding utterance. This striking level of coordination is thought to have arisen as a way to achieve social goals, such as gaining approval or emphasizing difference in status. But has the adaptation mechanism become so deeply embedded in the language-generation process as to become a reflex? We argue that fictional dialogs offer a way to study this question, since authors create the conversations but don’t receive the social benefits (rather, the imagined characters do). Indeed, we find significant coordination across many families of function words in our large movie-script corpus. We also report suggestive preliminary findings on the effects of gender and other features; e.g., surprisingly, for articles, on average, characters adapt more to females than to males.",
"title": ""
},
{
"docid": "fd0dccac0689390e77a0cc1fb14e5a34",
"text": "Chromatin remodeling is a complex process shaping the nucleosome landscape, thereby regulating the accessibility of transcription factors to regulatory regions of target genes and ultimately managing gene expression. The SWI/SNF (switch/sucrose nonfermentable) complex remodels the nucleosome landscape in an ATP-dependent manner and is divided into the two major subclasses Brahma-associated factor (BAF) and Polybromo Brahma-associated factor (PBAF) complex. Somatic mutations in subunits of the SWI/SNF complex have been associated with different cancers, while germline mutations have been associated with autism spectrum disorder and the neurodevelopmental disorders Coffin–Siris (CSS) and Nicolaides–Baraitser syndromes (NCBRS). CSS is characterized by intellectual disability (ID), coarsening of the face and hypoplasia or absence of the fifth finger- and/or toenails. So far, variants in five of the SWI/SNF subunit-encoding genes ARID1B, SMARCA4, SMARCB1, ARID1A, and SMARCE1 as well as variants in the transcription factor-encoding gene SOX11 have been identified in CSS-affected individuals. ARID2 is a member of the PBAF subcomplex, which until recently had not been linked to any neurodevelopmental phenotypes. In 2015, mutations in the ARID2 gene were associated with intellectual disability. In this study, we report on two individuals with private de novo ARID2 frameshift mutations. Both individuals present with a CSS-like phenotype including ID, coarsening of facial features, other recognizable facial dysmorphisms and hypoplasia of the fifth toenails. Hence, this study identifies mutations in the ARID2 gene as a novel and rare cause for a CSS-like phenotype and enlarges the list of CSS-like genes.",
"title": ""
},
{
"docid": "ba2ecc0656041cfee29fde2439345c39",
"text": "Software defined radios (SDR) are highly configurable hardware platforms that provide the technology for realizing the rapidly expanding third (and future) generation digital wireless communication infrastructure. While there are a number of silicon alternatives available for implementing the various functions in a SDR, field programmable gate arrays (FPGAs) are an attractive option for many of these tasks for reasons of performance, power consumption and flexibility. Amongst the more complex tasks performed in a high data rate wireless system is synchronization. This paper examines carrier synchronization in SDRs using FPGA based signal processors. We provide a tutorial style overview of carrier recovery techniques for QPSK and QAM modulation schemes and report on the design and FPGA implementation of a carrier recovery loop for a 16-QAM modern. Two design alternatives are presented to highlight the rich design space accessible using configurable logic. The FPGA device utilization and performance for a carrier recovery circuit using a look-up table approach and CORDIC arithmetic are presented. The simulation and FPGA implementation process using a recent system level design tool called System GeneratorTM for DSP described.",
"title": ""
},
{
"docid": "f8c6906f4d0deb812e42aaaff457a6d9",
"text": "By the early 1900s, Euro-Americans had extirpated gray wolves (Canis lupus) from most of the contiguous United States. Yellowstone National Park was not immune to wolf persecution and by the mid-1920s they were gone. After seven decades of absence in the park, gray wolves were reintroduced in 1995–1996, again completing the large predator guild (Smith et al. 2003). Yellowstone’s ‘‘experiment in time’’ thus provides a rare opportunity for studying potential cascading effects associated with the extirpation and subsequent reintroduction of an apex predator. Wolves represent a particularly important predator of large mammalian prey in northern hemisphere ecosystems by virtue of their group hunting and year-round activity (Peterson et al. 2003) and can have broad top-down effects on the structure and functioning of these systems (Miller et al. 2001, Soulé et al. 2003, Ray et al. 2005). If a tri-trophic cascade involving wolves–elk (Cervus elaphus)–plants is again underway in northern Yellowstone, theory would suggest two primary mechanisms: (1) density mediation through prey mortality and (2) trait mediation involving changes in prey vigilance, habitat use, and other behaviors (Brown et al. 1999, Berger 2010). Both predator-caused reductions in prey numbers and fear responses they elicit in prey can lead to cascading trophic-level effects across a wide range of biomes (Beschta and Ripple 2009, Laundré et al. 2010, Terborgh and Estes 2010). Thus, the occurrence of a trophic cascade could have important implications not only to the future structure and functioning of northern Yellowstone’s ecosystems but also for other portions of the western United States where wolves have been reintroduced, are expanding their range, or remain absent. However, attempting to identify the occurrence of a trophic cascade in systems with large mammalian predators, as well as the relative importance of density and behavioral mediation, represents a continuing scientific challenge. In Yellowstone today, there is an ongoing effort by various researchers to evaluate ecosystem processes in the park’s two northern ungulate winter ranges: (1) the ‘‘Northern Range’’ along the northern edge of the park (NRC 2002, Barmore 2003) and (2) the ‘‘Upper Gallatin Winter Range’’ along the northwestern corner of the park (Ripple and Beschta 2004b). Previous studies in northern Yellowstone have generally found that elk, in the absence of wolves, caused a decrease in aspen (Populus tremuloides) recruitment (i.e., the growth of seedlings or root sprouts above the browse level of elk). Within this context, Kauffman et al. (2010) initiated a study to provide additional understanding of factors such as elk density, elk behavior, and climate upon historical and contemporary patterns of aspen recruitment in the park’s Northern Range. Like previous studies, Kauffman et al. (2010) concluded that, irrespective of historical climatic conditions, elk have had a major impact on long-term aspen communities after the extirpation of wolves. But, unlike other studies that have seen improvement in the growth or recruitment of young aspen and other browse species in recent years, Kauffman et al. (2010) concluded in their Abstract: ‘‘. . . 
our estimates of relative survivorship of young browsable aspen indicate that aspen are not currently recovering in Yellowstone, even in the presence of a large wolf population.’’ In the interest of clarifying the potential role of wolves on woody plant community dynamics in Yellowstone’s northern winter ranges, we offer several counterpoints to the conclusions of Kauffman et al. (2010). We do so by readdressing several tasks identified in their Introduction (p. 2744): (1) the history of aspen recruitment failure, (2) contemporary aspen recruitment, and (3) aspen recruitment and predation risk. Task 1 covers the period when wolves were absent from Yellowstone and tasks 2 and 3 focus on the period when wolves were again present. We also include some closing comments regarding trophic cascades and ecosystem recovery. 1. History of aspen recruitment failure.—Although records of wolf and elk populations in northern Yellowstone are fragmentary for the early 1900s, the Northern Range elk population averaged ;10 900 animals (7.3 elk/km; Fig. 1A) as the last wolves were being removed in the mid 1920s. Soon thereafter increased browsing by elk of aspen and other woody species was noted in northern Yellowstone’s winter ranges (e.g., Rush 1932, Lovaas 1970). In an attempt to reduce the effects this large herbivore was having on vegetation, soils, and wildlife habitat in the Northern Manuscript received 13 January 2011; revised 10 June 2011; accepted 20 June 2011. Corresponding Editor: C. C. Wilmers. 1 Department of Forest Ecosystems and Society, Oregon State University, Corvallis, Oregon 97331 USA. 2 E-mail: [email protected]",
"title": ""
},
{
"docid": "1b9a0d3d9ce37601ad348a27cd8ebe60",
"text": "Data mining is becoming strategically important area for many business organizations including banking sector. It is a process of analyzing the data from various perspectives and summarizing it into valuable information. Data mining assists the banks to look for hidden pattern in a group and discover unknown relationship in the data. Today, customers have so many opinions with regard to where they can choose to do their business. Early data analysis techniques were oriented toward extracting quantitative and statistical data characteristics. These techniques facilitate useful data interpretations for the banking sector to avoid customer attrition. Customer retention is the most important factor to be analyzed in today’s competitive business environment. And also fraud is a significant problem in banking sector. Detecting and preventing fraud is difficult, because fraudsters develop new schemes all the time, and the schemes grow more and more sophisticated to elude easy detection. This paper analyzes the data mining techniques and its applications in banking sector like fraud prevention and detection, customer retention, marketing and risk management. Keywords— Banking Sector, Customer Retention, Credit Approval, Data mining, Fraud Detection,",
"title": ""
},
{
"docid": "403cbc725755d1d1886f8dacce157965",
"text": "Software engineering is a human task, and as such we must study what software engineers do and think. Understanding the normative practice of software engineering is the first step toward developing realistic solutions to better facilitate the engineering process. We conducted three studies using several data-gathering approaches to elucidate the patterns by which software engineers (SEs) use and update documentation. Our objective is to more accurately comprehend and model documentation use, usefulness, and maintenance, thus enabling better decision making and tool design by developers and project managers. Our results confirm the widely held belief that SEs typically do not update documentation as timely or completely as software process personnel and managers advocate. However, the results also reveal that out-of-date software documentation remains useful in many circumstances.",
"title": ""
},
{
"docid": "9d37260c493c40523c268f6e54c8b4ea",
"text": "Social collaborative filtering recommender systems extend the traditional user-to-item interaction with explicit user-to-user relationships, thereby allowing for a wider exploration of correlations among users and items, that potentially lead to better recommendations. A number of methods have been proposed in the direction of exploring the social network, either locally (i.e. the vicinity of each user) or globally. In this paper, we propose a novel methodology for collaborative filtering social recommendation that tries to combine the merits of both the aforementioned approaches, based on the soft-clustering of the Friend-of-a-Friend (FoaF) network of each user. This task is accomplished by the non-negative factorization of the adjacency matrix of the FoaF graph, while the edge-centric logic of the factorization algorithm is ameliorated by incorporating more general structural properties of the graph, such as the number of edges and stars, through the introduction of the exponential random graph models. The preliminary results obtained reveal the potential of this idea.",
"title": ""
},
{
"docid": "ecfd63af44fc3e68113a9197ce2e83b6",
"text": "In power substation automation systems (SASs) based on IEC 61850, conventional hardwired process connections are being replaced by switched Ethernet. To ensure system reliability and responsiveness, transmission of critical information required by protection and control tasks must satisfy hard delay constraints at all times. Therefore, delay performance conformance should be taken into consideration during the design phase of an SAS project. In this paper, we propose to study the worst-case delay performance of IEC 61850-9-2 process bus networks, which generally carry non-feedforward traffic patterns, through the combination of measurements and network-calculus-based analysis. As an Ethernet switch supports dedicated interconnections between its multiple interfaces, our proposed approach converts a non-feedforward network into feedforward ones by introducing service models for its individual output interfaces instead of modeling it in its entirety with a single service model. To derive practical delay bounds that can be validated against measurement results, our approach not only constructs traffic models based on the idiosyncrasies of process bus network and switched Ethernet, but also establishes service models of networking devices by taking measurements. Results from our case studies of both feedforward and non-feedforward process bus networks show that the proposed combination of network calculus and measurement-based modeling generates accurate delay bounds for Ethernet-based substation communication networks (SCNs). The proposed approach can thus be adopted by designers and architects to analytically evaluate worst-case delay performance at miscellaneous stages of SAS design.",
"title": ""
},
{
"docid": "296ce1f0dd7bf02c8236fa858bb1957c",
"text": "As many as one in 20 people in Europe and North America have some form of autoimmune disease. These diseases arise in genetically predisposed individuals but require an environmental trigger. Of the many potential environmental factors, infections are the most likely cause. Microbial antigens can induce cross-reactive immune responses against self-antigens, whereas infections can non-specifically enhance their presentation to the immune system. The immune system uses fail-safe mechanisms to suppress infection-associated tissue damage and thus limits autoimmune responses. The association between infection and autoimmune disease has, however, stimulated a debate as to whether such diseases might also be triggered by vaccines. Indeed there are numerous claims and counter claims relating to such a risk. Here we review the mechanisms involved in the induction of autoimmunity and assess the implications for vaccination in human beings.",
"title": ""
},
{
"docid": "a72ca91ab3d89e5918e8e13f98dc4a7d",
"text": "We describe the Lightweight Communications and Marshalling (LCM) library for message passing and data marshalling. The primary goal of LCM is to simplify the development of low-latency message passing systems, especially for real-time robotics research applications.",
"title": ""
},
{
"docid": "31cd031708856490f756d4399d7709d5",
"text": "Inspecting objects in the industry aims to guarantee product quality allowing problems to be corrected and damaged products to be discarded. Inspection is also widely used in railway maintenance, where wagon components need to be checked due to efficiency and safety concerns. In some organizations, hundreds of wagons are inspected visually by a human inspector, which leads to quality issues and safety risks for the inspectors. This paper describes a wagon component inspection approach using Deep Learning techniques to detect a particular damaged component: the shear pad. We compared our approach for convolutional neural networks with the state of art classification methods to distinguish among three shear pads conditions: absent, damaged, and undamaged shear pad. Our results are very encouraging showing empirical evidence that our approach has better performance than other classification techniques.",
"title": ""
},
{
"docid": "19879b108f668f3125e485daf19ab453",
"text": "This paper describes the development of anisotropic conductive films (ACFs) for ultra-fine pitch chip-on-glass (COG) application. In order to have reliable COG using ACF at fine pitch, the number of conductive particles trapped between the bump and substrate pad should be enough and less conductive particle between adjacent bumps. The anisotropic conductive film is double layered structure, in which ACF and NCF layer thickness is optimized, to have as many conductive particle as possible on bump after COG bonding. In ACF layer, non-conductive particles of diameter 1/5 times smaller than the conductive particles are added to prevent an electrical short between the bumps of COG assembly. The conductive particles are naturally insulated by the nonconductive particles even though conductive particles are flowed into and agglomerated in narrow gap between bumps during COG bonding. Also, flow property of the conductive particles is restrained due to nonconductive particles, and results the number of the conductive particles constantly maintained. To ensure the insulation property at 10 /spl mu/m gap, insulating coated conductive particles were used in ACF layer composition. The double-layered ACF using low temperature curable binder system is also effective in reducing the warpage level of COG assembly due to low modulus and low bonding temperature.",
"title": ""
},
{
"docid": "06f8f9cd1ac428008332dba85ec326b8",
"text": "This paper proposes a method for high-quality omnidirectional 3D reconstruction of augmented Manhattan worlds from catadioptric stereo video sequences. In contrast to existing works we do not rely on constructing virtual perspective views, but instead propose to optimize depth jointly in a unified omnidirectional space. Furthermore, we show that plane-based prior models can be applied even though planes in 3D do not project to planes in the omnidirectional domain. Towards this goal, we propose an omnidirectional slanted-plane Markov random field model which relies on plane hypotheses extracted using a novel voting scheme for 3D planes in omnidirectional space. To quantitatively evaluate our method we introduce a dataset which we have captured using our autonomous driving platform AnnieWAY which we equipped with two horizontally aligned catadioptric cameras and a Velodyne HDL-64E laser scanner for precise ground truth depth measurements. As evidenced by our experiments, the proposed method clearly benefits from the unified view and significantly outperforms existing stereo matching techniques both quantitatively and qualitatively. Furthermore, our method is able to reduce noise and the obtained depth maps can be represented very compactly by a small number of image segments and plane parameters.",
"title": ""
},
{
"docid": "5c96222feacb0454d353dcaa1f70fb83",
"text": "Geographically dispersed teams are rarely 100% dispersed. However, by focusing on teams that are either fully dispersed or fully co-located, team research to date has lived on the ends of a spectrum at which relatively few teams may actually work. In this paper, we develop a more robust view of geographic dispersion in teams. Specifically, we focus on the spatialtemporal distances among team members and the configuration of team members across sites (independent of the spatial and temporal distances separating those sites). To better understand the nature of dispersion, we develop a series of five new measures and explore their relationships with communication frequency data from a sample of 182 teams (of varying degrees of dispersion) from a Fortune 500 telecommunications firm. We conclude with recommendations regarding the use of different measures and important questions that they could help address. Geographic Dispersion in Teams 1",
"title": ""
},
{
"docid": "7b880ef0049fbb0ec64b0e5342f840c0",
"text": "The title question was addressed using an energy model that accounts for projected global energy use in all sectors (transportation, heat, and power) of the global economy. Global CO(2) emissions were constrained to achieve stabilization at 400-550 ppm by 2100 at the lowest total system cost (equivalent to perfect CO(2) cap-and-trade regime). For future scenarios where vehicle technology costs were sufficiently competitive to advantage either hydrogen or electric vehicles, increased availability of low-cost, low-CO(2) electricity/hydrogen delayed (but did not prevent) the use of electric/hydrogen-powered vehicles in the model. This occurs when low-CO(2) electricity/hydrogen provides more cost-effective CO(2) mitigation opportunities in the heat and power energy sectors than in transportation. Connections between the sectors leading to this counterintuitive result need consideration in policy and technology planning.",
"title": ""
},
{
"docid": "242a79e9e0d38c5dbd2e87d109566b6e",
"text": "Δ9-Tetrahydrocannabinol (THC) is the main active constituent of cannabis. In recent years, the average THC content of some cannabis cigarettes has increased up to approximately 60 mg per cigarette (20% THC cigarettes). Acute cognitive and psychomotor effects of THC among recreational users after smoking cannabis cigarettes containing such high doses are unknown. The objective of this study was to study the dose–effect relationship between the THC dose contained in cannabis cigarettes and cognitive and psychomotor effects for THC doses up to 69.4 mg (23%). This double-blind, placebo-controlled, randomised, four-way cross-over study included 24 non-daily male cannabis users (two to nine cannabis cigarettes per month). Participants smoked four cannabis cigarettes containing 0, 29.3, 49.1 and 69.4 mg THC on four exposure days. The THC dose in smoked cannabis was linearly associated with a slower response time in all tasks (simple reaction time, visuo-spatial selective attention, sustained attention, divided attention and short-term memory tasks) and motor control impairment in the motor control task. The number of errors increased significantly with increasing doses in the short-term memory and the sustained attention tasks. Some participants showed no impairment in motor control even at THC serum concentrations higher than 40 ng/mL. High feeling and drowsiness differed significantly between treatments. Response time slowed down and motor control worsened, both linearly, with increasing THC doses. Consequently, cannabis with high THC concentrations may be a concern for public health and safety if cannabis smokers are unable to titrate to a high feeling corresponding to a desired plasma THC level.",
"title": ""
},
{
"docid": "17a80da08bd36947909c0d4ab470af95",
"text": "Goal-oriented dialogue systems typically communicate with a backend (e.g. database, Web API) to complete certain tasks to reach a goal. The intents that a dialogue system can recognize are mostly included to the system by the developer statically. For an open dialogue system that can work on more than a small set of well curated data and APIs, this manual intent creation will not scalable. In this paper, we introduce a straightforward methodology for intent creation based on semantic annotation of data and services on the web. With this method, the Natural Language Understanding (NLU) module of a goal-oriented dialogue system can adapt to newly introduced APIs without requiring heavy developer involvement. We were able to extract intents and necessary slots to be filled from schema.org annotations. We were also able to create a set of initial training sentences for classifying user utterances into the generated intents. We demonstrate our approach on the NLU module of a state-of-the art dialogue system development framework.",
"title": ""
},
{
"docid": "0a2a39149013843b0cece63687ebe9e9",
"text": "177Lu-labeled PSMA-617 is a promising new therapeutic agent for radioligand therapy (RLT) of patients with metastatic castration-resistant prostate cancer (mCRPC). Initiated by the German Society of Nuclear Medicine, a retrospective multicenter data analysis was started in 2015 to evaluate efficacy and safety of 177Lu-PSMA-617 in a large cohort of patients.\n\n\nMETHODS\nOne hundred forty-five patients (median age, 73 y; range, 43-88 y) with mCRPC were treated with 177Lu-PSMA-617 in 12 therapy centers between February 2014 and July 2015 with 1-4 therapy cycles and an activity range of 2-8 GBq per cycle. Toxicity was categorized by the common toxicity criteria for adverse events (version 4.0) on the basis of serial blood tests and the attending physician's report. The primary endpoint for efficacy was biochemical response as defined by a prostate-specific antigen decline ≥ 50% from baseline to at least 2 wk after the start of RLT.\n\n\nRESULTS\nA total of 248 therapy cycles were performed in 145 patients. Data for biochemical response in 99 patients as well as data for physician-reported and laboratory-based toxicity in 145 and 121 patients, respectively, were available. The median follow-up was 16 wk (range, 2-30 wk). Nineteen patients died during the observation period. Grade 3-4 hematotoxicity occurred in 18 patients: 10%, 4%, and 3% of the patients experienced anemia, thrombocytopenia, and leukopenia, respectively. Xerostomia occurred in 8%. The overall biochemical response rate was 45% after all therapy cycles, whereas 40% of patients already responded after a single cycle. Elevated alkaline phosphatase and the presence of visceral metastases were negative predictors and the total number of therapy cycles positive predictors of biochemical response.\n\n\nCONCLUSION\nThe present retrospective multicenter study of 177Lu-PSMA-617 RLT demonstrates favorable safety and high efficacy exceeding those of other third-line systemic therapies in mCRPC patients. Future phase II/III studies are warranted to elucidate the survival benefit of this new therapy in patients with mCRPC.",
"title": ""
}
] |
scidocsrr
|
1ea58b29585f867b247e59d9b21b8452
|
Face spoofing detection with highlight removal effect and distortions
|
[
{
"docid": "af3faaf203d771bd7fae3363b8ec8060",
"text": "Recent advances on biometrics, information forensics, and security have improved the accuracy of biometric systems, mainly those based on facial information. However, an ever-growing challenge is the vulnerability of such systems to impostor attacks, in which users without access privileges try to authenticate themselves as valid users. In this work, we present a solution to video-based face spoofing to biometric systems. Such type of attack is characterized by presenting a video of a real user to the biometric system. To the best of our knowledge, this is the first attempt of dealing with video-based face spoofing based in the analysis of global information that is invariant to video content. Our approach takes advantage of noise signatures generated by the recaptured video to distinguish between fake and valid access. To capture the noise and obtain a compact representation, we use the Fourier spectrum followed by the computation of the visual rhythm and extraction of the gray-level co-occurrence matrices, used as feature descriptors. Results show the effectiveness of the proposed approach to distinguish between valid and fake users for video-based spoofing with near-perfect classification results.",
"title": ""
},
{
"docid": "b814aa8f08884ac3c483236ee7533ec4",
"text": "Biometric systems based on face recognition have been shown unreliable under the presence of face-spoofing images. Hence, automatic solutions for spoofing detection became necessary. In this paper, face-spoofing detection is proposed by searching for Moiré patterns due to the overlap of the digital grids. The conditions under which these patterns arise are first described, and their detection is proposed which is based on peak detection in the frequency domain. Experimental results for the algorithm are presented for an image database of facial shots under several conditions.",
"title": ""
},
{
"docid": "fe33ff51ca55bf745bdcdf8ee02e2d36",
"text": "A robust face detection technique along with mouth localization, processing every frame in real time (video rate), is presented. Moreover, it is exploited for motion analysis onsite to verify \"liveness\" as well as to achieve lip reading of digits. A methodological novelty is the suggested quantized angle features (\"quangles\") being designed for illumination invariance without the need for preprocessing (e.g., histogram equalization). This is achieved by using both the gradient direction and the double angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. A major benefit is reduced processing time (i.e., that the training of effective cascaded classifiers is feasible in very short time, less than 1 h for data sets of order 104). Scale invariance is implemented through the use of an image scale pyramid. We propose \"liveness\" verification barriers as applications for which a significant amount of computation is avoided when estimating motion. Novel strategies to avert advanced spoofing attempts (e.g., replayed videos which include person utterances) are demonstrated. We present favorable results on face detection for the YALE face test set and competitive results for the CMU-MIT frontal face test set as well as on \"liveness\" verification barriers.",
"title": ""
}
] |
[
{
"docid": "e0d0a0f59f5a894c3674b903c5b7b14c",
"text": "Automated Information Systems has played a major role in the growth, advancement, and modernization of our daily work processes. The main purpose of this paper is to develop a safe and secure web based attendance monitoring system using Biometrics and Radio Frequency Identification (RFID) Technology based on multi-tier architecture, for both computers and smartphones. The system can maintain the attendance records of both students and teachers/staff members of an institution. The system can also detect the current location of the students, faculties, and other staff members anywhere within the domain of institution campus. With the help of android application one can receive live feeds of various campus activities, keep updated with the current topics in his/her enrolled courses as well as track his/her friends on a real time basis. An automated SMS service is facilitated in the system, which sends an SMS automatically to the parents in order to notify that their ward has successfully reached the college. Parents as well as student will be notified via e-mail, if the student is lagging behind in attendance. There is a functionality of automatic attendance performance graph in the system, which gives an idea of the student's consistency in attendance throughout the semester.",
"title": ""
},
{
"docid": "b3c36ea18399de3847bc2509d9600d18",
"text": "ÐHidden Markov models (HMMs) are stochastic models capable of statistical learning and classification. They have been applied in speech recognition and handwriting recognition because of their great adaptability and versatility in handling sequential signals. On the other hand, as these models have a complex structure and also because the involved data sets usually contain uncertainty, it is difficult to analyze the multiple observation training problem without certain assumptions. For many years researchers have used Levinson's training equations in speech and handwriting applications, simply assuming that all observations are independent of each other. This paper presents a formal treatment of HMM multiple observation training without imposing the above assumption. In this treatment, the multiple observation probability is expressed as a combination of individual observation probabilities without losing generality. This combinatorial method gives one more freedom in making different dependence-independence assumptions. By generalizing Baum's auxiliary function into this framework and building up an associated objective function using the Lagrange multiplier method, it is proven that the derived training equations guarantee the maximization of the objective function. Furthermore, we show that Levinson's training equations can be easily derived as a special case in this treatment. Index TermsÐHidden Markov model, forward-backward procedure, Baum-Welch algorithm, multiple observation training.",
"title": ""
},
{
"docid": "fcfebde52c63b9286791476673dc4b70",
"text": "A chat dialogue system, a chatbot, or a conversational agent is a computer program designed to hold a conversation using natural language. Many popular chat dialogue systems are based on handcrafted rules, written in Artificial Intelligence Markup Language (AIML). However, a manual design of rules requires significant efforts, as in practice most chatbots require hundreds if not thousands of rules. This paper presents the method of automated extraction of AIML rules from real Twitter conversation data. Our preliminary experimental results show the possibility of obtaining natural-language conversation between the user and a dialogue system without the necessity of handcrafting its knowledgebase.",
"title": ""
},
{
"docid": "df6f0db99cece9b39a8ea2c227e79478",
"text": "Artificial Intelligence (AI) techniques have been successfully applied to a wide range of problems that perform problem solving such as diagnosis, decision making and optimization problems. However, any AI algorithm applied to a creative problem requires some mechanism to substitute for the creative spark found in the human, as the computer has no creative capacity. Randomness, as supplied by a random number generator, cannot be the sole mechanism to bring about a creative composition. This paper examines three AI approaches applied to music composition. Specifically, the paper introduces MAGMA, a knowledge-based system that uses three different AI algorithms to generate music. Sample songs generated by MAGMA are compared.",
"title": ""
},
{
"docid": "c81214839fbba0bd0e81e904ea9b1d13",
"text": "Short-term traffic forecasting based on deep learning methods, especially long short-term memory (LSTM) neural networks, has received much attention in recent years. However, the potential of deep learning methods in traffic forecasting has not yet fully been exploited in terms of the depth of the model architecture, the spatial scale of the prediction area, and the predictive power of spatial-temporal data. In this paper, a deep stacked bidirectional and unidirectional LSTM (SBU- LSTM) neural network architecture is proposed, which considers both forward and backward dependencies in time series data, to predict network-wide traffic speed. A bidirectional LSTM (BDLSM) layer is exploited to capture spatial features and bidirectional temporal dependencies from historical data. To the best of our knowledge, this is the first time that BDLSTMs have been applied as building blocks for a deep architecture model to measure the backward dependency of traffic data for prediction. The proposed model can handle missing values in input data by using a masking mechanism. Further, this scalable model can predict traffic speed for both freeway and complex urban traffic networks. Comparisons with other classical and state-of-the-art models indicate that the proposed SBU-LSTM neural network achieves superior prediction performance for the whole traffic network in both accuracy and robustness.",
"title": ""
},
{
"docid": "fad2000af9be8c099c0fd88dc341d974",
"text": "The computer technology has emerged as a necessity in our day to day life to deal with various aspects like education, banking, communication, entertainment etc. Computer system’s security is threatened by weapons named as malware to accomplish malicious intention of its writers. Various solutions are available to detect these threats like AV Scanners, Intrusion Detection System, and Firewalls etc. These solutions of malware detection traditionally use signatures of malware to detect their presence in our system. But these methods are also evaded due to some obfuscation techniques employed by malware authors. This survey paper highlights the existing detection and analysis methodologies used for these obfuscated malicious code.",
"title": ""
},
{
"docid": "3deced64cd17210f7e807e686c0221af",
"text": "How should we measure metacognitive (\"type 2\") sensitivity, i.e. the efficacy with which observers' confidence ratings discriminate between their own correct and incorrect stimulus classifications? We argue that currently available methods are inadequate because they are influenced by factors such as response bias and type 1 sensitivity (i.e. ability to distinguish stimuli). Extending the signal detection theory (SDT) approach of Galvin, Podd, Drga, and Whitmore (2003), we propose a method of measuring type 2 sensitivity that is free from these confounds. We call our measure meta-d', which reflects how much information, in signal-to-noise units, is available for metacognition. Applying this novel method in a 2-interval forced choice visual task, we found that subjects' metacognitive sensitivity was close to, but significantly below, optimality. We discuss the theoretical implications of these findings, as well as related computational issues of the method. We also provide free Matlab code for implementing the analysis.",
"title": ""
},
{
"docid": "aa85585deaee26c864178be52f5c3440",
"text": "Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. It has reportedly outperformed a few evolutionary algorithms (EAs) and other search heuristics like the particle swarm optimization (PSO) when tested over both benchmark and real-world problems. DE, however, is not completely free from the problems of slow and/or premature convergence. This paper describes a family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member. The idea of small neighborhoods, defined over the index-graph of parameter vectors, draws inspiration from the community of the PSO algorithms. The proposed schemes balance the exploration and exploitation abilities of DE without imposing serious additional burdens in terms of function evaluations. They are shown to be statistically significantly better than or at least comparable to several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions. The paper also investigates the applications of the new DE variants to two real-life problems concerning parameter estimation for frequency modulated sound waves and spread spectrum radar poly-phase code design.",
"title": ""
},
{
"docid": "918e7434798ebcfdf075fa93cbffba39",
"text": "Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), achieving significant improvements over the baseline, based on both automatic metrics and human evaluation on multiple datasets. Next, we propose a novel entailment-enhanced reward (CIDEnt) that corrects phrase-matching based metrics (such as CIDEr) to only allow for logically-implied partial matches and avoid contradictions, achieving further significant improvements over the CIDEr-reward model. Overall, our CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset.",
"title": ""
},
{
"docid": "4100a10b2a03f3a1ba712901cee406d2",
"text": "Traditionally, many clinicians tend to forego esthetic considerations when full-coverage restorations are indicated for pediatric patients with primary dentitions. However, the availability of new zirconia pediatric crowns and reliable techniques for cementation makes esthetic outcomes practical and consistent when restoring primary dentition. Two cases are described: a 3-year-old boy who presented with severe early childhood caries affecting both anterior and posterior teeth, and a 6-year-old boy who presented with extensive caries of his primary posterior dentition, including a molar requiring full coverage. The parents of both boys were concerned about esthetics, and the extent of decay indicated the need for full-coverage restorations. This led to the boys receiving treatment using a restorative procedure in which the carious teeth were prepared for and restored with esthetic tooth-colored zirconia crowns. In both cases, comfortable function and pleasing esthetics were achieved.",
"title": ""
},
{
"docid": "d9950f75380758d0a0f4fd9d6e885dfd",
"text": "In recent decades, the interactive whiteboard (IWB) has become a relatively common educational tool in Western schools. The IWB is essentially a large touch screen, that enables the user to interact with digital content in ways that are not possible with an ordinary computer-projector-canvas setup. However, the unique possibilities of IWBs are rarely leveraged to enhance teaching and learning beyond the primary school level. This is particularly noticeable in high school physics. We describe how a high school physics teacher learned to use an IWB in a new way, how she planned and implemented a lesson on the topic of orbital motion of planets, and what tensions arose in the process. We used an ethnographic approach to account for the teacher’s and involved students’ perspectives throughout the process of teacher preparation, lesson planning, and the implementation of the lesson. To interpret the data, we used the conceptual framework of activity theory. We found that an entrenched culture of traditional white/blackboard use in physics instruction interferes with more technologically innovative and more student-centered instructional approaches that leverage the IWB’s unique instructional potential. Furthermore, we found that the teacher’s confidence in the mastery of the IWB plays a crucial role in the teacher’s willingness to transfer agency within the lesson to the students.",
"title": ""
},
{
"docid": "9ae370847ec965a3ce9c7636f8d6a726",
"text": "In this paper we present a wearable device for control of home automation systems via hand gestures. This solution has many advantages over traditional home automation interfaces in that it can be used by those with loss of vision, motor skills, and mobility. By combining other sources of context with the pendant we can reduce the number and complexity of gestures while maintaining functionality. As users input gestures, the system can also analyze their movements for pathological tremors. This information can then be used for medical diagnosis, therapy, and emergency services.Currently, the Gesture Pendant can recognize control gestures with an accuracy of 95% and userdefined gestures with an accuracy of 97% It can detect tremors above 2HZ within .1 Hz.",
"title": ""
},
{
"docid": "f6df414f8f61dbdab32be2f05d921cb8",
"text": "The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas, we perform this task at ease given very few examples for learning. It has been proposed that the quick grasp of concept may come from the shared knowledge between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts as each part holds immense amounts of information on how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an objects. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on hand-written digits and show that this model generalizes to multiple datasets.",
"title": ""
},
{
"docid": "bcbba4f99e33ac0daea893e280068304",
"text": "Arterial plasma glucose values throughout a 24-h period average approximately 90 mg/dl, with a maximal concentration usually not exceeding 165 mg/dl such as after meal ingestion1 and remaining above 55 mg/dl such as after exercise2 or a moderate fast (60 h).3 This relative stability contrasts with the situation for other substrates such as glycerol, lactate, free fatty acids, and ketone bodies whose fluctuations are much wider (Table 2.1).4 This narrow range defining normoglycemia is maintained through an intricate regulatory and counterregulatory neuro-hormonal system: A decrement in plasma glucose as little as 20 mg/dl (from 90 to 70 mg/dl) will suppress the release of insulin and will decrease glucose uptake in certain areas in the brain (e.g., hypothalamus where glucose sensors are located); this will activate the sympathetic nervous system and trigger the release of counterregulatory hormones (glucagon, catecholamines, cortisol, and growth hormone).5 All these changes will increase glucose release into plasma and decrease its removal so as to restore normoglycemia. On the other hand, a 10 mg/dl increment in plasma glucose will stimulate insulin release and suppress glucagon secretion to prevent further increments and restore normoglycemia. Glucose in plasma either comes from dietary sources or is either the result of the breakdown of glycogen in liver (glycogenolysis) or the formation of glucose in liver and kidney from other carbons compounds (precursors) such as lactate, pyruvate, amino acids, and glycerol (gluconeogenesis). In humans, glucose removed from plasma may have different fates in different tissues and under different conditions (e.g., postabsorptive vs. postprandial), but the pathways for its disposal are relatively limited. It (1) may be immediately stored as glycogen or (2) may undergo glycolysis, which can be non-oxidative producing pyruvate (which can be reduced to lactate or transaminated to form alanine) or oxidative through conversion to acetyl CoA which is further oxidized through the tricarboxylic acid cycle to form carbon dioxide and water. Non-oxidative glycolysis carbons undergo gluconeogenesis and the newly formed glucose is either stored as glycogen or released back into plasma (Fig. 2.1).",
"title": ""
},
{
"docid": "4584a3a2b0e1cb30ba1976bd564d74b9",
"text": "Deep neural networks (DNNs) have achieved great success, but the applications to mobile devices are limited due to their huge model size and low inference speed. Much effort thus has been devoted to pruning DNNs. Layer-wise neuron pruning methods have shown their effectiveness, which minimize the reconstruction error of linear response with a limited number of neurons in each single layer pruning. In this paper, we propose a new layer-wise neuron pruning approach by minimizing the reconstruction error of nonlinear units, which might be more reasonable since the error before and after activation can change significantly. An iterative optimization procedure combining greedy selection with gradient decent is proposed for single layer pruning. Experimental results on benchmark DNN models show the superiority of the proposed approach. Particularly, for VGGNet, the proposed approach can compress its disk space by 13.6× and bring a speedup of 3.7×; for AlexNet, it can achieve a compression rate of 4.1× and a speedup of 2.2×, respectively.",
"title": ""
},
{
"docid": "caa10e745374970796bdd0039416a29d",
"text": "s: Feature selection methods try to find a subset of the available features to improve the application of a learning algorithm. Many methods are based on searching a feature set that optimizes some evaluation function. On the other side, feature set estimators evaluate features individually. Relief is a well known and good feature set estimator. While being usually faster feature estimators have some disadvantages. Based on Relief ideas, we propose a feature set measure that can be used to evaluate the feature sets in a search process. We show how the proposed measure can help guiding the search process, as well as selecting the most appropriate feature set. The new measure is compared with a consistency measure, and the highly reputed wrapper approach.",
"title": ""
},
{
"docid": "44f920073c5196ba2c5fc98351be12cd",
"text": "Successful deployment of Low power and Lossy Networks (LLNs) requires self-organising, self-configuring, security, and mobility support. However, these characteristics can be exploited to perform security attacks against the Routing Protocol for Low-Power and Lossy Networks (RPL). In this paper, we address the lack of strong identity and security mechanisms in RPL. We first demonstrate by simulation the impact of Sybil-Mobile attack, namely SybM, on RPL with respect to control overhead, packet delivery and energy consumption. Then, we introduce a new Intrusion Detection System (IDS) scheme for RPL, named Trust-based IDS (T-IDS). T-IDS is a distributed, cooperative and hierarchical trust-based IDS, which can detect novel intrusions by comparing network behavior deviations. In T-IDS, each node is considered as monitoring node and collaborates with his peers to detect intrusions and report them to a 6LoWPAN Border Router (6BR). In our solution, we introduced a new timer and minor extensions to RPL messages format to deal with mobility, identity and multicast issues. In addition, each node is equipped with a Trusted Platform Module co-processor to handle identification and off-load security related computation and storage.",
"title": ""
},
{
"docid": "43dfbf378a47cadf6868eb9bac22a4cd",
"text": "Maximum power point tracking (MPPT) techniques are employed in photovoltaic (PV) systems to make full utilization of the PV array output power which depends on solar irradiation and ambient temperature. Among all the MPPT strategies, perturbation and observation (P&O) and hill climbing methods are widely applied in the MPPT controllers due to their simplicity and easy implementation. In this paper, both P&O and hill climbing methods are adopted to implement a grid-connected PV system. Their performance is evaluated and compared through theoretical analysis and digital simulation. P&O MPPT method exhibits fast dynamic performance and well regulated PV output voltage, which is more suitable than hill climbing method for grid-connected PV system.",
"title": ""
},
{
"docid": "788c9479bc5eb1a7bb36bfd774280f45",
"text": "The low-density parity-check (LDPC) codes are used to achieve excellent performance with low encoding and decoding complexity. One major criticism concerning LDPC codes has been their apparent high encoding complexity and memory inefficient nature due to large parity check matrix. More generally, we consider the encoding problem for codes specified by sparse parity-check matrices. We show how to exploit the sparseness of the parity-check matrix to obtain efficient encoders. A new technique for efficient encoding of LDP Codes based on the known concept of approximate lower triangulation (ALT) is introduced. The algorithm computes parity check symbols by solving a set of sparse equations, and the triangular factorization is employed to solve the equations efficiently. The key of the encoding method is to get the systematic approximate lower triangular (SALT) form of the Parity Check Matrix with minimum gap g, because the smaller the gap is, the more efficient encoding will be obtained. The functions are to be coded in MATLAB.",
"title": ""
},
{
"docid": "8cb5659bdbe9d376e2a3b0147264d664",
"text": "Group brainstorming is widely adopted as a design method in the domain of software development. However, existing brainstorming literature has consistently proven group brainstorming to be ineffective under the controlled laboratory settings. Yet, electronic brainstorming systems informed by the results of these prior laboratory studies have failed to gain adoption in the field because of the lack of support for group well-being and member support. Therefore, there is a need to better understand brainstorming in the field. In this work, we seek to understand why and how brainstorming is actually practiced, rather than how brainstorming practices deviate from formal brainstorming rules, by observing brainstorming meetings at Microsoft. The results of this work show that, contrary to the conventional brainstorming practices, software teams at Microsoft engage heavily in the constraint discovery process in their brainstorming meetings. We identified two types of constraints that occur in brainstorming meetings. Functional constraints are requirements and criteria that define the idea space, whereas practical constraints are limitations that prioritize the proposed solutions.",
"title": ""
}
] |
scidocsrr
|
7ea035a6027d6da88d8c33e98560f0b0
|
Classifying NBA Offensive Plays Using Neural Networks
|
[
{
"docid": "f78fcf875104f8bab2fa465c414331c6",
"text": "In this paper, we present a systematic framework for recognizing realistic actions from videos “in the wild”. Such unconstrained videos are abundant in personal collections as well as on the Web. Recognizing action from such videos has not been addressed extensively, primarily due to the tremendous variations that result from camera motion, background clutter, changes in object appearance, and scale, etc. The main challenge is how to extract reliable and informative features from the unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune these features. We use motion statistics to acquire stable motion features and clean static features. Furthermore, PageRank is used to mine the most informative static features. In order to further construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost is chosen to integrate all the heterogeneous yet complementary features for recognition. We have tested the framework on the KTH dataset and our own dataset consisting of 11 categories of actions collected from YouTube and personal videos, and have obtained impressive results for action recognition and action localization.",
"title": ""
}
] |
[
{
"docid": "4718e64540f5b8d7399852fb0e16944a",
"text": "In this paper, we propose a novel extension of the extreme learning machine (ELM) algorithm for single-hidden layer feedforward neural network training that is able to incorporate subspace learning (SL) criteria on the optimization process followed for the calculation of the network's output weights. The proposed graph embedded ELM (GEELM) algorithm is able to naturally exploit both intrinsic and penalty SL criteria that have been (or will be) designed under the graph embedding framework. In addition, we extend the proposed GEELM algorithm in order to be able to exploit SL criteria in arbitrary (even infinite) dimensional ELM spaces. We evaluate the proposed approach on eight standard classification problems and nine publicly available datasets designed for three problems related to human behavior analysis, i.e., the recognition of human face, facial expression, and activity. Experimental results denote the effectiveness of the proposed approach, since it outperforms other ELM-based classification schemes in all the cases.",
"title": ""
},
{
"docid": "3299c32ee123e8c5fb28582e5f3a8455",
"text": "Software defects, commonly known as bugs, present a serious challenge for system reliability and dependability. Once a program failure is observed, the debugging activities to locate the defects are typically nontrivial and time consuming. In this paper, we propose a novel automated approach to pin-point the root-causes of software failures.\n Our proposed approach consists of three steps. The first step is bug prediction, which leverages the existing work on anomaly-based bug detection as exceptional behavior during program execution has been shown to frequently point to the root cause of a software failure. The second step is bug isolation, which eliminates false-positive bug predictions by checking whether the dynamic forward slices of bug predictions lead to the observed program failure. The last step is bug validation, in which the isolated anomalies are validated by dynamically nullifying their effects and observing if the program still fails. The whole bug prediction, isolation and validation process is fully automated and can be implemented with efficient architectural support. Our experiments with 6 programs and 7 bugs, including a real bug in the gcc 2.95.2 compiler, show that our approach is highly effective at isolating only the relevant anomalies. Compared to state-of-art debugging techniques, our proposed approach pinpoints the defect locations more accurately and presents the user with a much smaller code set to analyze.",
"title": ""
},
{
"docid": "97d696fc301d661062e2afb6f2ec8505",
"text": "Despite the great advances made by deep learning in many machine learning problems, there is a relative dearth of deep learning approaches for anomaly detection. Those approaches which do exist involve networks trained to perform a task other than anomaly detection, namely generative models or compression, which are in turn adapted for use in anomaly detection; they are not trained on an anomaly detection based objective. In this paper we introduce a new anomaly detection method—Deep Support Vector Data Description—, which is trained on an anomaly detection based objective. The adaptation to the deep regime necessitates that our neural network and training procedure satisfy certain properties, which we demonstrate theoretically. We show the effectiveness of our method on MNIST and CIFAR-10 image benchmark datasets as well as on the detection of adversarial examples of GTSRB stop signs.",
"title": ""
},
{
"docid": "ca1aeb2730eb11844d0dde46cf15de4e",
"text": "Knowledge of the bio-impedance and its equivalent circuit model at the electrode-electrolyte/tissue interface is important in the application of functional electrical stimulation. Impedance can be used as a merit to evaluate the proximity between electrodes and targeted tissues. Understanding the equivalent circuit parameters of the electrode can further be leveraged to set a safe boundary for stimulus parameters in order not to exceed the water window of electrodes. In this paper, we present an impedance characterization technique and implement a proof-of-concept system using an implantable neural stimulator and an off-the-shelf microcontroller. The proposed technique yields the parameters of the equivalent circuit of an electrode through large signal analysis by injecting a single low-intensity biphasic current stimulus with deliberately inserted inter-pulse delay and by acquiring the transient electrode voltage at three well-specified timings. Using low-intensity stimulus allows the derivation of electrode double layer capacitance since capacitive charge-injection dominates when electrode overpotential is small. Insertion of the inter-pulse delay creates a controlled discharge time to estimate the Faradic resistance. The proposed method has been validated by measuring the impedance of a) an emulated Randles cells made of discrete circuit components and b) a custom-made platinum electrode array in-vitro, and comparing estimated parameters with the results derived from an impedance analyzer. The proposed technique can be integrated into implantable or commercial neural stimulator system at low extra power consumption, low extra-hardware cost, and light computation.",
"title": ""
},
{
"docid": "94b00d09c303d92a44c08fb211c7a8ed",
"text": "Pull-Request (PR) is the primary method for code contributions from thousands of developers in GitHub. To maintain the quality of software projects, PR review is an essential part of distributed software development. Assigning new PRs to appropriate reviewers will make the review process more effective which can reduce the time between the submission of a PR and the actual review of it. However, reviewer assignment is now organized manually in GitHub. To reduce this cost, we propose a reviewer recommender to predict highly relevant reviewers of incoming PRs. Combining information retrieval with social network analyzing, our approach takes full advantage of the textual semantic of PRs and the social relations of developers. We implement an online system to show how the reviewer recommender helps project managers to find potential reviewers from crowds. Our approach can reach a precision of 74% for top-1 recommendation, and a recall of 71% for top-10 recommendation.",
"title": ""
},
{
"docid": "3003d3b353a2e6edf4a9c8008b1be8a0",
"text": "An important issue faced while employing Pyroelectric InfraRed (PIR) sensors in an outdoor Wireless Sensor Network (WSN) deployment for intrusion detection, is that the output of the PIR sensor can, as shown in a recent paper, degenerate into a weak and unpredictable signal when the background temperature is close to that of the intruder. The current paper explores the use of an optical camera as a complementary sensing modality in an outdoor WSN deployment to reliably handle such situations. A combination of backgroundsubtraction and the Lucas-Kanade optical-flow algorithms is used to classify between human and animal in an outdoor environment based on video data.,,The algorithms were developed keeping in mind the need for the camera to act when called upon, as a substitute for the PIR sensor by turning in comparable classification accuracies. All algorithms are implemented on a mote in the case of the PIR sensor array and on an Odroid single-board computer in the case of the optical camera. Three sets of experimental results are presented. The first set shows the optical-camera platform to turn in under supervised learning, high accuracy classification (in excess of 95%) comparable to that of the PIR sensor array. The second set of results correspond to an outdoor WSN deployment over a period of 7 days where similar accuracies are achieved. The final set also corresponds to a single-day outdoor WSN deployment and shows that the optical camera can act as a stand-in for the PIR sensor array when the ambient temperature conditions cause the PIR sensor to perform poorly.",
"title": ""
},
{
"docid": "b2bcf059713aaa9802f9d8e7793106dd",
"text": "A framework is presented for analyzing most of the experimental work performed in software engineering over the past several years. The framework of experimentation consists of four categories corresponding to phases of the experimentation process: definition, planning, operation, and interpretation. A variety of experiments are described within the framework and their contribution to the software engineering discipline is discussed. Some recommendations for the application of the experimental process in software engineering are included.",
"title": ""
},
{
"docid": "1ada0fc6b22bba07d9baf4ccab437671",
"text": "Tree-based path planners have been shown to be well suited to solve various high dimensional motion planning problems. Here we present a variant of the Rapidly-Exploring Random Tree (RRT) path planning algorithm that is able to explore narrow passages or difficult areas more effectively. We show that both workspace obstacle information and C-space information can be used when deciding which direction to grow. The method includes many ways to grow the tree, some taking into account the obstacles in the environment. This planner works best in difficult areas when planning for free flying rigid or articulated robots. Indeed, whereas the standard RRT can face difficulties planning in a narrow passage, the tree based planner presented here works best in these areas",
"title": ""
},
{
"docid": "2f778cc324101f5b7d1c9349e181e088",
"text": "Business Intelligence (BI) refers to technologies, tools, and practices for collecting, integrating, analyzing, and presenting large volumes of information to enable better decision making. Today's BI architecture typically consists of a data warehouse (or one or more data marts), which consolidates data from several operational databases, and serves a variety of front-end querying, reporting, and analytic tools. The back-end of the architecture is a data integration pipeline for populating the data warehouse by extracting data from distributed and usually heterogeneous operational sources; cleansing, integrating and transforming the data; and loading it into the data warehouse. Since BI systems have been used primarily for off-line, strategic decision making, the traditional data integration pipeline is a oneway, batch process, usually implemented by extract-transform-load (ETL) tools. The design and implementation of the ETL pipeline is largely a labor-intensive activity, and typically consumes a large fraction of the effort in data warehousing projects. Increasingly, as enterprises become more automated, data-driven, and real-time, the BI architecture is evolving to support operational decision making. This imposes additional requirements and tradeoffs, resulting in even more complexity in the design of data integration flows. These include reducing the latency so that near real-time data can be delivered to the data warehouse, extracting information from a wider variety of data sources, extending the rigidly serial ETL pipeline to more general data flows, and considering alternative physical implementations. We describe the requirements for data integration flows in this next generation of operational BI system, the limitations of current technologies, the research challenges in meeting these requirements, and a framework for addressing these challenges. The goal is to facilitate the design and implementation of optimal flows to meet business requirements.",
"title": ""
},
{
"docid": "6bc5b5b22bbd38e03657a2b6e4dfb23e",
"text": "I believe the single most important reason why we are so helpless against cyber-attackers is that present systems are not supervisable. This opinion is developed in years spent working on network intrusion detection, both as academic and entrepreneur. I believe we need to start writing software and systems that are supervisable by design; in particular, we should do this for embedded devices. In this paper, I present a personal view on the field of intrusion detection, and conclude with some consideration on software design.",
"title": ""
},
{
"docid": "6e0a6d221c7047d3201ea3021459628f",
"text": "Principal angles between subspaces (PABS) (also called canonical angles) serve as a classical tool in mathematics, statistics, and applications, e.g., data mining. Traditionally, PABS are introduced and used via their cosines. The tangents of PABS have attracted relatively less attention, but are important for analysis of convergence of subspace iterations for eigenvalue problems. We explicitly construct matrices, such that their singular values are equal to the tangents of PABS, using several approaches: orthonormal and nonorthonormal bases for subspaces, and orthogonal projectors. Cornell University Library This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2012 201 Broadway, Cambridge, Massachusetts 02139",
"title": ""
},
{
"docid": "a9868eeca8a2b94c7bfe2e9bf880645d",
"text": "UNLABELLED\nPart 1 of this two-part series (presented in the June issue of IJSPT) provided an introduction to functional movement screening, as well as the history, background, and a summary of the evidence regarding the reliability of the Functional Movement Screen (FMS™). Part 1 presented three of the seven fundamental movement patterns that comprise the FMS™, and the specific ordinal grading system from 0-3, used in the their scoring. Specifics for scoring each test are presented. Part 2 of this series provides a review of the concepts associated with the analysis of fundamental movement as a screening system for functional movement competency. In addition, the four remaining movements of the FMS™, which complement those described in Part 1, will be presented (to complete the total of seven fundamental movements): Shoulder Mobility, the Active Straight Leg Raise, the Trunk Stability Push-up, and Rotary Stability. The final four patterns are described in detail, and the specifics for scoring each test are presented, as well as the proposed clinical implications for receiving a grade less than a perfect \"3\". The intent of this two part series is to present the concepts associated with screening of fundamental movements, whether it is the FMS™ system or a different system devised by another clinician. Such a fundamental screen of the movement system should be incorporated into pre-participation screening and return to sport testing in order to determine whether an athlete has the essential movements needed to participate in sports activities at a level of minimum competency. Part 2 concludes with a discussion of the evidence related to functional movement screening, myths related to the FMS™, the future of functional movement screening, and the concept of movement as a system.\n\n\nLEVEL OF EVIDENCE\n5.",
"title": ""
},
{
"docid": "abde419c67119fa9d16f365262d39b34",
"text": "Silicon nitride is the most commonly used passivation layer in biosensor applications where electronic components must be interfaced with ionic solutions. Unfortunately, the predominant method for functionalizing silicon nitride surfaces, silane chemistry, suffers from a lack of reproducibility. As an alternative, we have developed a silane-free pathway that allows for the direct functionalization of silicon nitride through the creation of primary amines formed by exposure to a radio frequency glow discharge plasma fed with humidified air. The aminated surfaces can then be further functionalized by a variety of methods; here we demonstrate using glutaraldehyde as a bifunctional linker to attach a robust NeutrAvidin (NA) protein layer. Optimal amine formation, based on plasma exposure time, was determined by labeling treated surfaces with an amine-specific fluorinated probe and characterizing the coverage using X-ray photoelectron spectroscopy (XPS). XPS and radiolabeling studies also reveal that plasma-modified surfaces, as compared with silane-modified surfaces, result in similar NA surface coverage, but notably better reproducibility.",
"title": ""
},
{
"docid": "5a2c1c1362b543a1da3fe4d3e786a368",
"text": "We describe a fully automated system for the classification of acral volar melanomas. We used a total of 213 acral dermoscopy images (176 nevi and 37 melanomas). Our automatic tumor area extraction algorithm successfully extracted the tumor in 199 cases (169 nevi and 30 melanomas), and we developed a diagnostic classifier using these images. Our linear classifier achieved a sensitivity (SE) of 100%, a specificity (SP) of 95.9%, and an area under the receiver operating characteristic curve (AUC) of 0.993 using a leave-one-out cross-validation strategy (81.1% SE, 92.1% SP; considering 14 unsuccessful extraction cases as false classification). In addition, we developed three pattern detectors for typical dermoscopic structures such as parallel ridge, parallel furrow, and fibrillar patterns. These also achieved good detection accuracy as indicated by their AUC values: 0.985, 0.931, and 0.890, respectively. The features used in the melanoma-nevus classifier and the parallel ridge detector have significant overlap.",
"title": ""
},
{
"docid": "f7e773113b9006256ab51d975c8f53c5",
"text": "Received 12/4/2013 Accepted 19/6/2013 (006063) 1 Laboratorio Integral de Investigación en Alimentos – LIIA, Instituto Tecnológico de Tepic – ITT, Av. Tecnológico, 2595, CP 63175, Tepic, Nayarit, México, e-mail: [email protected] 2 Dirección General de Innovación Tecnológica, Centro de Excelencia, Universidad Autónoma de Tamaulipas – UAT, Ciudad Victoria, Tamaulipas, México 3 Centro de Investigación en Ciencia Aplicada y Tecnología Avanzada – CICATA, Instituto Politécnico Nacional – IPN, Querétaro, Querétaro, México *Corresponding author Effect of high hydrostatic pressure on antioxidant content of ‘Ataulfo’ mango during postharvest maturation Viviana Guadalupe ORTEGA1, José Alberto RAMÍREZ2, Gonzalo VELÁZQUEZ3, Beatriz TOVAR1, Miguel MATA1, Efigenia MONTALVO1*",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "f2c846f200d9c59362bf285b2b68e2cd",
"text": "A Root Cause Failure Analysis (RCFA) for repeated impeller blade failures in a five stage centrifugal propane compressor is described. The initial failure occurred in June 2007 with a large crack found in one blade on the third impeller and two large pieces released from adjacent blades on the fourth impeller. An RCFA was performed to determine the cause of the failures. The failure mechanism was identified to be high cycle fatigue. Several potential causes related to the design, manufacture, and operation of the compressor were examined. The RCFA concluded that the design and manufacture were sound and there were no conclusive issues with respect to operation. A specific root cause was not identified. In June 2009, a second case of blade cracking occurred with a piece once again released from a single blade on the fourth impeller. Due to the commonality with the previous instance this was identified as a repeat failure. Specifically, both cases had occurred in the same compressor whereas, two compressors operating in identical service in adjacent Liquefied natural Gas (LNG) trains had not encountered the problem. A second RCFA was accordingly launched with the ultimate objective of preventing further repeated failures. Both RCFA teams were established comprising of engineers from the End User (RasGas), the OEM (Elliott Group) and an independent consultancy (Southwest Research Institute). The scope of the current investigation included a detailed metallurgical assessment, impeller modal frequency assessment, steady and unsteady computational fluid dynamics (CFD) assessment, finite element analyses (FEA), fluid structure interaction (FSI) assessment, operating history assessment and a comparison change analysis. By the process of elimination, the most probable causes were found to be associated with: • vane wake excitation of either the impeller blade leading edge modal frequency from severe mistuning and/or unusual response of the 1-diameter cover/blades modal frequency • mist carry over from third side load upstream scrubber • end of curve operation in the compressor rear section INTRODUCTION RasGas currently operates seven LNG trains at Ras Laffan Industrial City, Qatar. Train 3 was commissioned in 2004 with a nameplate LNG production of 4.7 Mtpa which corresponds to a wet sour gas feed of 790 MMscfd (22.37 MMscmd). Trains 4 and 5 were later commissioned in 2005 and 2006 respectively. They were also designed for a production 4.7 Mtpa LNG but have higher wet sour gas feed rates of 850 MMscfd (24.05 MMscmd). Despite these differences, the rated operation of the propane compressor is identical in each train. Figure 1. APCI C3-MR Refrigeration system for Trains 3, 4 and 5 The APCI C3-MR refrigeration cycle (Roberts, et al. 2002), depicted in Figure 1 is common for all three trains. Propane is circulated in a continuous loop between four compressor inlets and a single discharge. The compressed discharge gas is cooled and condensed in three sea water cooled heat exchangers before being routed to the LLP, LP, MP and HP evaporators. Here, the liquid propane is evaporated by the transfer of heat from the warmer feed and MR gas streams. It finally passes through one of the four suction scrubbers before re-entering the compressor as a gas. Although not shown, each section inlet has a dedicated anti-surge control loop from the de-superheater discharge to the suction scrubber inlet. A cross section of the propane compressor casing and rotor is illustrated in Figure 2. 
It is a straight through centrifugal unit with a horizontally split casing. Five impellers are mounted upon the 21.3 ft (6.5 m) long shaft. Three side loads add gas upstream of the suction at impellers 2, 3 & 4. The impellers are of two piece construction, with each piece fabricated from AISI 4340 forgings that were heat treated such that the material has sufficient strength and toughness for operation at temperatures down to -50F (-45.5C). The blades are milled to the hub piece and the cover piece was welded to the blades using a robotic metal inert gas (MIG) welding process. The impellers are mounted to the shaft with an interference fit. The thrust disc is mounted to the shaft with a line on line fit and antirotation key. The return channel and side load inlets are all vaned to align the downstream swirl angle. The impeller diffusers are all vaneless. A summary of the relevant compressor design parameters is given in Table 1. The complete compressor string is also depicted in Figure 1. The propane compressor is coupled directly to the HP MR compressor and driven by a GE Frame 7EA gas turbine and ABB 16086 HP (12 MW) helper motor at 3600 rpm rated shaft speed. Table 1. Propane Compressor design parameters.",
"title": ""
},
{
"docid": "6cb2e41787378eca0dbbc892f46274e5",
"text": "Both reviews and user-item interactions (i.e., rating scores) have been widely adopted for user rating prediction. However, these existing techniques mainly extract the latent representations for users and items in an independent and static manner. That is, a single static feature vector is derived to encode user preference without considering the particular characteristics of each candidate item. We argue that this static encoding scheme is incapable of fully capturing users’ preferences, because users usually exhibit different preferences when interacting with different items. In this article, we propose a novel context-aware user-item representation learning model for rating prediction, named CARL. CARL derives a joint representation for a given user-item pair based on their individual latent features and latent feature interactions. Then, CARL adopts Factorization Machines to further model higher order feature interactions on the basis of the user-item pair for rating prediction. Specifically, two separate learning components are devised in CARL to exploit review data and interaction data, respectively: review-based feature learning and interaction-based feature learning. In the review-based learning component, with convolution operations and attention mechanism, the pair-based relevant features for the given user-item pair are extracted by jointly considering their corresponding reviews. However, these features are only reivew-driven and may not be comprehensive. Hence, an interaction-based learning component further extracts complementary features from interaction data alone, also on the basis of user-item pairs. The final rating score is then derived with a dynamic linear fusion mechanism. Experiments on seven real-world datasets show that CARL achieves significantly better rating prediction accuracy than existing state-of-the-art alternatives. Also, with the attention mechanism, we show that the pair-based relevant information (i.e., context-aware information) in reviews can be highlighted to interpret the rating prediction for different user-item pairs.",
"title": ""
},
{
"docid": "56dabbcf36d734211acc0b4a53f23255",
"text": "Cloud computing is a way to increase the capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. It extends Information Technology’s (IT) existing capabilities. In the last few years, cloud computing has grown from being a promising business concept to one of the fast growing segments of the IT industry. But as more and more information on individuals and companies are placed in the cloud, concerns are beginning to grow about just how safe an environment it is. Despite of all the hype surrounding the cloud, enterprise customers are still reluctant to deploy their business in the cloud. Security is one of the major issues which reduces the growth of cloud computing and complications with data privacy and data protection continue to plague the market. The advent of an advanced model should not negotiate with the required functionalities and capabilities present in the current model. A new model targeting at improving features of an existing model must not risk or threaten other important features of the current model. The architecture of cloud poses such a threat to the security of the existing technologies when deployed in a cloud environment. Cloud service users need to be vigilant in understanding the risks of data breaches in this new environment. In this paper, a survey of the different security risks that pose a threat to the cloud is presented. This paper is a survey more specific to the different security issues that has emanated due to the nature of the service delivery models of a cloud computing system. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
e9eaff331781424aff629e8a7058c844
|
Impact of Frequency Ramp Nonlinearity, Phase Noise, and SNR on FMCW Radar Accuracy
|
[
{
"docid": "663d3d4b0b1d2ede7a85b2101f6102de",
"text": "In this paper, a chirp z-transform (CZT)-based algorithm for frequency-modulated continuous wave (FMCW) radar applications is presented. The proposed algorithm is optimized for real-time implementation in field-programmable gate arrays. To achieve a very high accuracy, the FMCW radar uses an additional phase evaluation. Therefore, a phase calculation based on the CZT algorithm is derived and compared with a correlation based algorithm. For a better classification of the algorithm, the respective Cramér-Rao bounds are calculated. The performance of the algorithm is shown by the evaluation of different radar measurements with a K-band radar. In the measurements, an accuracy of 5 μm with a mean standard deviation of 774 nm is achieved, which nearly matches the theoretically predicted mean standard deviation of 160 nm.",
"title": ""
}
] |
[
{
"docid": "dd8f969d36d5fe037fdb83cdf4ee450f",
"text": "Electronic commerce (EC) has the potential to improve efficiency and productivity in many areas and has received significant attention in many countries. However, there has been some doubt about the relevance of ecommerce for developing countries. The absence of adequate basic infrastructural, socio-economic, sociocultural, and government ICT strategies have created a significant barrier in the adoption and growth of ecommerce in the Kurdistan region of Iraq. In this paper, the author shows that to understand the adoption and diffusion of ecommerce in Kurdistan, socio-cultural issues like transactional trust and social effect of shopping must be considered. The paper presents and discusses these issues hindering ecommerce adoption in Kurdistan. DOI: 10.4018/jtd.2011040104 48 International Journal of Technology Diffusion, 2(2), 47-59, April-June 2011 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. business organizations in developing countries to gain greater global access and reduce transaction costs (Kraemer et al., 2002; Humphrey et al., 2003). However, previous research has found that developing countries have not derived the expected benefits from ecommerce (Pare, 2002; Humphrey et al., 2003). Consequently, there is still doubt about how ecommerce will actually lead firms in developing countries to new trading opportunities (Humphrey et al., 2003; Vatanasakdakul et al., 2004). The obstacles to reaping the benefits brought about by ecommerce are often underestimated. Accessing the Web is possible only when telephones and PCs are available, but these technologies are still in very scarce supply. In addition to this problem, Internet access is still very costly both in absolute terms and relative to per-capita income in most part of Kurdistan region. While PC prices have fallen dramatically over the last decade, they remain beyond the reach of most individual users and enterprises in Kurdistan. Add to this, the human capital cost of installing, operating, maintaining, training and support, the costs are beyond the means of many enterprises. There are significant disparities in the level of Internet penetration across parts of Kurdistan, which have profound implications for an individual’s ability to participate in ecommerce. Moreover, skilled personnel are often lacking, the transport facilities are poor, and secure payment facilities non-existent in most parts of the region. Other than the insufficient physical infrastructures, the electronic transaction facilities are deficient and the legal and regulatory framework inadequate. Most consumer markets face severe limitations in terms of connectivity, ability to pay, deliveries, willingness to make purchases on the Web, ownership of credit cards, and access to other means of payment for online purchases and accessibility in terms of physical deliveries. Moreover, the low level of economic development and small per-capita incomes, the limited skills base with which to build ecommerce services (Odedra-Straub, 2003). While Kurdistan has abundant cheap labour, there still remains the issue of developing IT literacy and education to ensure the quality and size of the IT workforce. The need to overcome infrastructural bottlenecks in telecommunication, transport system, electronic payment systems, security, standards, skilled workforce and logistics must be addressed, before ecommerce can be considered suitable for this region. 
The objective of this paper is to examine the barriers hindering ecommerce adoption, focusing on technological infrastructures, socio-economic, socio-cultural and the lack of governmental policies as they relate to Kurdistan region. It seeks to identify and describe these issues that hinder the adoption and diffusion of ecommerce in the region. Kurdistan region of Iraq is just like any other developing country where the infrastructures are not as developed as they are in developed countries of U.S., Europe, or some Asian countries, and these infrastructural limitations are significant impediments to ecommerce adoption and diffusion. The next section briefly presents background information about Kurdistan region of Iraq. A BRIEF BACKGROUND SUMMARY OF KURDISTAN REGION OF IRAQ This section briefly discusses Kurdistan region which forms the background to this study. The choice of Kurdistan as the context of this study is motivated by the quest to understand why the region is lagging behind in the adoption of ecommerce. Kurdistan is an autonomous Region of Iraq; it is one of the only regions which have gained official recognition internationally as an autonomous federal entity, with leverages in foreign relations, defense, internal security, investment and governance – a similar setting is Quebec region of Canada. The region continues to view itself as an integral part of a united Iraq but one in which it administers its own affairs. Kurdistan has a regional government (KRG) as well as a functional parliament and bureaucracy. Kurdistan is a parliamentary democracy.",
"title": ""
},
{
"docid": "2fc8918896f02d248597b5950fc33857",
"text": "This paper investigates the design and implementation of a finger-like robotic structure capable of reproducing human hand gestural movements performed by a multi-fingered, hand-like structure. In this work, we present a pneumatic circuit and a closed-loop controller for a finger-like soft pneumatic actuator. Experimental results demonstrate the performance of the pneumatic and control systems of the soft pneumatic actuator, and its ability to track human movement trajectories with affective content.",
"title": ""
},
{
"docid": "4667b31c7ee70f7bc3709fc40ec6140f",
"text": "This article presents a method for rectifying and stabilising video from cell-phones with rolling shutter (RS) cameras. Due to size constraints, cell-phone cameras have constant, or near constant focal length, making them an ideal application for calibrated projective geometry. In contrast to previous RS rectification attempts that model distortions in the image plane, we model the 3D rotation of the camera. We parameterise the camera rotation as a continuous curve, with knots distributed across a short frame interval. Curve parameters are found using non-linear least squares over inter-frame correspondences from a KLT tracker. By smoothing a sequence of reference rotations from the estimated curve, we can at a small extra cost, obtain a high-quality image stabilisation. Using synthetic RS sequences with associated ground-truth, we demonstrate that our rectification improves over two other methods. We also compare our video stabilisation with the methods in iMovie and Deshaker.",
"title": ""
},
{
"docid": "c2fe863aba72df9df8405329c36046b6",
"text": "Feature learning for 3D shapes is challenging due to the lack of natural paramterization for 3D surface models. We adopt the multi-view depth image representation and propose Multi-View Deep Extreme Learning Machine (MVD-ELM) to achieve fast and quality projective feature learning for 3D shapes. In contrast to existing multiview learning approaches, our method ensures the feature maps learned for different views are mutually dependent via shared weights and in each layer, their unprojections together form a valid 3D reconstruction of the input 3D shape through using normalized convolution kernels. These lead to a more accurate 3D feature learning as shown by the encouraging results in several applications. Moreover, the 3D reconstruction property enables clear visualization of the learned features, which further demonstrates the meaningfulness of our feature learning.",
"title": ""
},
{
"docid": "b771737351b984881e0fce7f9bb030e8",
"text": "BACKGROUND\nConsidering the high prevalence of dementia, it would be of great value to develop effective tools to improve cognitive function. We examined the effects of a human-type communication robot on cognitive function in elderly women living alone.\n\n\nMATERIAL/METHODS\nIn this study, 34 healthy elderly female volunteers living alone were randomized to living with either a communication robot or a control robot at home for 8 weeks. The shape, voice, and motion features of the communication robot resemble those of a 3-year-old boy, while the control robot was not designed to talk or nod. Before living with the robot and 4 and 8 weeks after living with the robot, experiments were conducted to evaluate a variety of cognitive functions as well as saliva cortisol, sleep, and subjective fatigue, motivation, and healing.\n\n\nRESULTS\nThe Mini-Mental State Examination score, judgement, and verbal memory function were improved after living with the communication robot; those functions were not altered with the control robot. In addition, the saliva cortisol level was decreased, nocturnal sleeping hours tended to increase, and difficulty in maintaining sleep tended to decrease with the communication robot, although alterations were not shown with the control. The proportions of the participants in whom effects on attenuation of fatigue, enhancement of motivation, and healing could be recognized were higher in the communication robot group relative to the control group.\n\n\nCONCLUSIONS\nThis study demonstrates that living with a human-type communication robot may be effective for improving cognitive functions in elderly women living alone.",
"title": ""
},
{
"docid": "ba2029c92fc1e9277e38edff0072ac82",
"text": "Estimation, recognition, and near-future prediction of 3D trajectories based on their two dimensional projections available from one camera source is an exceptionally difficult problem due to uncertainty in the trajectories and environment, high dimensionality of the specific trajectory states, lack of enough labeled data and so on. In this article, we propose a solution to solve this problem based on a novel deep learning model dubbed disjunctive factored four-way conditional restricted Boltzmann machine (DFFW-CRBM). Our method improves state-of-the-art deep learning techniques for high dimensional time-series modeling by introducing a novel tensor factorization capable of driving forth order Boltzmann machines to considerably lower energy levels, at no computational costs. DFFW-CRBMs are capable of accurately estimating, recognizing, and performing near-future prediction of three-dimensional trajectories from their 2D projections while requiring limited amount of labeled data. We evaluate our method on both simulated and real-world data, showing its effectiveness in predicting and classifying complex ball trajectories and human activities.",
"title": ""
},
{
"docid": "b1488b35284b6610d44d178d56cc89eb",
"text": "We introduce an unsupervised discriminative model for the task of retrieving experts in online document collections. We exclusively employ textual evidence and avoid explicit feature engineering by learning distributed word representations in an unsupervised way. We compare our model to state-of-the-art unsupervised statistical vector space and probabilistic generative approaches. Our proposed log-linear model achieves the retrieval performance levels of state-of-the-art document-centric methods with the low inference cost of so-called profile-centric approaches. It yields a statistically significant improved ranking over vector space and generative models in most cases, matching the performance of supervised methods on various benchmarks. That is, by using solely text we can do as well as methods that work with external evidence and/or relevance feedback. A contrastive analysis of rankings produced by discriminative and generative approaches shows that they have complementary strengths due to the ability of the unsupervised discriminative model to perform semantic matching.",
"title": ""
},
{
"docid": "bb2e4f81ea95e54e1e6a266135e8b8ff",
"text": "The nonparametric Wilcoxon Rank Sum (also known as the Mann-Whitney U) and the permutation t-tests are robust with respect to Type I error for departures from population normality, and both are powerful alternatives to the independent samples Student's t-test for detecting shift in location. The question remains regarding their comparative statistical power for small samples, particularly for non-normal distributions. Monte Carlo simulations indicated the rank-based Wilcoxon test was found to be more powerful than both the t and the permutation t-tests.",
"title": ""
},
{
"docid": "18c3d950c4a2394185543a0f08bc1717",
"text": "Prediction is pervasive in human cognition and plays a central role in language comprehension. At an electrophysiological level, this cognitive function contributes substantially in determining the amplitude of the N400. In fact, the amplitude of the N400 to words within a sentence has been shown to depend on how predictable those words are: The more predictable a word, the smaller the N400 elicited. However, predictive processing can be based on different sources of information that allow anticipation of upcoming constituents and integration in context. In this study, we investigated the ERPs elicited during the comprehension of idioms, that is, prefabricated multiword strings stored in semantic memory. When a reader recognizes a string of words as an idiom before the idiom ends, she or he can develop expectations concerning the incoming idiomatic constituents. We hypothesized that the expectations driven by the activation of an idiom might differ from those driven by discourse-based constraints. To this aim, we compared the ERP waveforms elicited by idioms and two literal control conditions. The results showed that, in both cases, the literal conditions exhibited a more negative potential than the idiomatic condition. Our analyses suggest that before idiom recognition the effect is due to modulation of the N400 amplitude, whereas after idiom recognition a P300 for the idiomatic sentence has a fundamental role in the composition of the effect. These results suggest that two distinct predictive mechanisms are at work during language comprehension, based respectively on probabilistic information and on categorical template matching.",
"title": ""
},
{
"docid": "63da0b3d1bc7d6aedd5356b8cdf67b24",
"text": "This paper concentrated on a new application of Deep Neural Network (DNN) approach. The DNN, also widely known as Deep Learning(DL), has been the most popular topic in research community recently. Through the DNN, the original data set can be represented in a new feature space with machine learning algorithms, and intelligence models may have the chance to obtain a better performance in the “learned” feature space. Scientists have achieved encouraging results by employing DNN in some research fields, including Computer Vision, Speech Recognition, Natural Linguistic Programming and Bioinformation Processing. However, as an approach mainly functioned for learning features, DNN is reasonably believed to be a more universal approach: it may have the potential in other data domains and provide better feature spaces for other type of problems. In this paper, we present some initial investigations on applying DNN to deal with the time series problem in meteorology field. In our research, we apply DNN to process the massive weather data involving millions of atmosphere records provided by The Hong Kong Observatory (HKO)1. The obtained features are employed to predict the weather change in the next 24 hours. The results show that the DNN is able to provide a better feature space for weather data sets, and DNN is also a potential tool for the feature fusion of time series problems.",
"title": ""
},
{
"docid": "4b47c2f98ebc8f7b19f90fdf1edcb2ee",
"text": "Prevalent theories about consciousness propose a causal relation between lack of spatial coding and absence of conscious experience: The failure to code the position of an object is assumed to prevent this object from entering consciousness. This is consistent with influential theories of unilateral neglect following brain damage, according to which spatial coding of neglected stimuli is defective, and this would keep their processing at the nonconscious level. Contrary to this view, we report evidence showing that spatial coding and consciousness can dissociate. A patient with left neglect, who was not aware of contralesional stimuli, was able to process their color and position. However, in contrast to (ipsilesional) consciously perceived stimuli, color and position of neglected stimuli were processed separately. We propose that individual object features, including position, can be processed without attention and consciousness and that conscious perception of an object depends on the binding of its features into an integrated percept.",
"title": ""
},
{
"docid": "fd63f9b9454358810a68fc003452509b",
"text": "The years that students spend in college are perhaps the most influential years on the rest of their lives. College students face many different decisions day in and day out that may determine how successful they will be in the future. They will choose majors, whether or not to play a sport, which clubs to join, whether they should join a fraternity or sorority, which classes to take, and how much time to spend studying. It is unclear what aspects of college will benefit a person the most down the road. Are some majors better than others? Is earning a high GPA important? Or will simply getting a degree be enough to make a good living? These are a few of the many questions that college students have.",
"title": ""
},
{
"docid": "3c98c5bd1d9a6916ce5f6257b16c8701",
"text": "As financial time series are inherently noisy and non-stationary, it is regarded as one of the most challenging applications of time series forecasting. Due to the advantages of generalization capability in obtaining a unique solution, support vector regression (SVR) has also been successfully applied in financial time series forecasting. In the modeling of financial time series using SVR, one of the key problems is the inherent high noise. Thus, detecting and removing the noise are important but difficult tasks when building an SVR forecasting model. To alleviate the influence of noise, a two-stage modeling approach using independent component analysis (ICA) and support vector regression is proposed in financial time series forecasting. ICA is a novel statistical signal processing technique that was originally proposed to find the latent source signals from observed mixture signals without having any prior knowledge of the mixing mechanism. The proposed approach first uses ICA to the forecasting variables for generating the independent components (ICs). After identifying and removing the ICs containing the noise, the rest of the ICs are then used to reconstruct the forecasting variables which contain less noise and served as the input variables of the SVR forecasting model. In order to evaluate the performance of the proposed approach, the Nikkei 225 opening index and TAIEX closing index are used as illustrative examples. Experimental results show that the proposed model outperforms the SVR model with non-filtered forecasting variables and a random walk model.",
"title": ""
},
{
"docid": "1c079b53b0967144a183f65a16e10158",
"text": "Android has provided dynamic code loading (DCL) since API level one. DCL allows an app developer to load additional code at runtime. DCL raises numerous challenges with regards to security and accountability analysis of apps. While previous studies have investigated DCL on Android, in this paper we formulate and answer three critical questions that are missing from previous studies: (1) Where does the loaded code come from (remotely fetched or locally packaged), and who is the responsible entity to invoke its functionality? (2) In what ways is DCL utilized to harden mobile apps, specifically, application obfuscation? (3) What are the security risks and implications that can be found from DCL in off-the-shelf apps? We design and implement DYDROID, a system which uses both dynamic and static analysis to analyze dynamically loaded code. Dynamic analysis is used to automatically exercise apps, capture DCL behavior, and intercept the loaded code. Static analysis is used to investigate malicious behavior and privacy leakage in that dynamically loaded code. We have used DYDROID to analyze over 46K apps with little manual intervention, allowing us to conduct a large-scale measurement to investigate five aspects of DCL, such as source identification, malware detection, vulnerability analysis, obfuscation analysis, and privacy tracking analysis. We have several interesting findings. (1) 27 apps are found to violate the content policy of Google Play by executing code downloaded from remote servers. (2) We determine the distribution, pros/cons, and implications of several common obfuscation methods, including DEX encryption/loading. (3) DCL’s stealthiness enables it to be a channel to deploy malware, and we find 87 apps loading malicious binaries which are not detected by existing antivirus tools. (4) We found 14 apps that are vulnerable to code injection attacks due to dynamically loading code which is writable by other apps. (5) DCL is mainly used by third-party SDKs, meaning that app developers may not know what sort of sensitive functionality is injected into their apps.",
"title": ""
},
{
"docid": "f282a0e666a2b2f3f323870fc07217bd",
"text": "The cultivation of pepper has great importance in all regions of Brazil, due to its characteristics of profi tability, especially when the producer and processing industry add value to the product, or its social importance because it employs large numbers of skilled labor. Peppers require monthly temperatures ranging between 21 and 30 °C, with an average of 18 °C. At low temperatures, there is a decrease in germination, wilting of young parts, and slow growth. Plants require adequate level of nitrogen, favoring plants and fruit growth. Most the cultivars require large spacing for adequate growth due to the canopy of the plants. Proper insect, disease, and weed control prolong the harvest of fruits for longer periods, reducing losses. The crop cycle and harvest period are directly affected by weather conditions, incidence of pests and diseases, and cultural practices including adequate fertilization, irrigation, and adoption of phytosanitary control measures. In general for most cultivars, the fi rst harvest starts 90 days after sowing, which can be prolonged for a couple of months depending on the plant physiological condition.",
"title": ""
},
{
"docid": "3fa16d5e442bc4a2398ba746d6aaddfe",
"text": "Although many users create predictable passwords, the extent to which users realize these passwords are predictable is not well understood. We investigate the relationship between users' perceptions of the strength of specific passwords and their actual strength. In this 165-participant online study, we ask participants to rate the comparative security of carefully juxtaposed pairs of passwords, as well as the security and memorability of both existing passwords and common password-creation strategies. Participants had serious misconceptions about the impact of basing passwords on common phrases and including digits and keyboard patterns in passwords. However, in most other cases, participants' perceptions of what characteristics make a password secure were consistent with the performance of current password-cracking tools. We find large variance in participants' understanding of how passwords may be attacked, potentially explaining why users nonetheless make predictable passwords. We conclude with design directions for helping users make better passwords.",
"title": ""
},
{
"docid": "edce0a0d0b594e21271a4116e223f84b",
"text": "Eliciting the preferences of a set of agents over a set of alternatives is a problem of fundamental importance in social choice theory. Prior work on this problem has studied the query complexity of preference elicitation for the unrestricted domain and for the domain of single peaked preferences. In this paper, we consider the domain of single crossing preference profiles and study the query complexity of preference elicitation under various settings. We consider two distinct situations: when an ordering of the voters with respect to which the profile is single crossing is known versus when it is unknown. We also consider different access models: when the votes can be accessed at random, as opposed to when they are coming in a pre-defined sequence. In the sequential access model, we distinguish two cases when the ordering is known: the first is that sequence in which the votes appear is also a single-crossing order, versus when it is not. The main contribution of our work is to provide polynomial time algorithms with low query complexity for preference elicitation in all the above six cases. Further, we show that the query complexities of our algorithms are optimal up to constant factors for all but one of the above six cases. We then present preference elicitation algorithms for profiles which are close to being single crossing under various notions of closeness, for example, single crossing width, minimum number of candidates|voters whose deletion makes a profile single crossing.",
"title": ""
},
{
"docid": "12b15731e6ad4798cca1d04c4217e0e0",
"text": "Bed surface particle size patchiness may play a central role in bedload and morphologic response to changes in sediment supply in gravel-bed rivers. Here we test a 1-D model (from Parker ebook) of bedload transport, surface grain size, and channel profile with two previously published flume studies that documented bed surface response, and specifically patch development, to reduced sediment supply. The model over predicts slope changes and under predicts average bed surface grain size changes because it does not account for patch dynamics. Field studies reported here using painted rocks as tracers show that fine patches and coarse patches may initiate transport at the same stage, but that much greater transport occurs in the finer patches. A theory for patch development should include grain interactions (similar size grains stopping each other, fine ones mobilizing coarse particles), effects of boundary shear stress divergence, and sorting due to cross-stream sloping bed surfaces.",
"title": ""
},
{
"docid": "32f2416b74baa4b35f853c21c75bbf90",
"text": "In recent years, deep neural networks have emerged as a dominant machine learning tool for a wide variety of application domains. However, training a deep neural network requires a large amount of labeled data, which is an expensive process in terms of time, labor and human expertise. Domain adaptation or transfer learning algorithms address this challenge by leveraging labeled data in a different, but related source domain, to develop a model for the target domain. Further, the explosive growth of digital data has posed a fundamental challenge concerning its storage and retrieval. Due to its storage and retrieval efficiency, recent years have witnessed a wide application of hashing in a variety of computer vision applications. In this paper, we first introduce a new dataset, Office-Home, to evaluate domain adaptation algorithms. The dataset contains images of a variety of everyday objects from multiple domains. We then propose a novel deep learning framework that can exploit labeled source data and unlabeled target data to learn informative hash codes, to accurately classify unseen target data. To the best of our knowledge, this is the first research effort to exploit the feature learning capabilities of deep neural networks to learn representative hash codes to address the domain adaptation problem. Our extensive empirical studies on multiple transfer tasks corroborate the usefulness of the framework in learning efficient hash codes which outperform existing competitive baselines for unsupervised domain adaptation.",
"title": ""
},
{
"docid": "3bc6bde3519900055d24b5b8da914843",
"text": "Previous research on audio source separation based on deep neural networks (DNNs) mainly focuses on estimating the magnitude spectrum of target sources and typically, phase of the mixture signal is combined with the estimated magnitude spectra in an ad-hoc way. Although recovering target phase is assumed to be important for the improvement of separation quality, it can be difficult to handle the periodic nature of the phase with the regression approach. Unwrapping phase is one way to eliminate the phase discontinuity, however, it increases the range of value along with the times of unwrapping, making it difficult for DNNs to model. To overcome this difficulty, we propose to treat the phase estimation problem as a classification problem by discretizing phase values and assigning class indices to them. Experimental results show that our classificationbased approach 1) successfully recovers the phase of the target source in the discretized domain, 2) improves signal-todistortion ratio (SDR) over the regression-based approach in both speech enhancement task and music source separation (MSS) task, and 3) outperforms state-of-the-art MSS.",
"title": ""
}
] |
scidocsrr
|
5f79e67677a3e5466f69feb39b76e6dc
|
Superscalar instruction execution in the 21164 Alpha microprocessor
|
[
{
"docid": "874f981d31242b085c794d6085c346ab",
"text": "A new CMOS microprocessor, the Alpha 21164, reaches 1,200 mips/600 MFLOPS (peak performance). This new implementation of the Alpha architecture achieves SPECint92/SPECfp92 performance of 345/505 (estimated). At these performance levels, the Alpha 21164 has delivered the highest performance of any commercially available microprocessor in the world as of January 1995. It contains a quad-issue, superscalar instruction unit; two 64-bit integer execution pipelines; two 64-bit floating-point execution pipelines; and a high-performance memory subsystem with multiprocessor-coherent write-back caches. OVERVIEW OF THE ALPHA 21164 The Alpha 21164 microprocessor is now a product of Digital Semiconductor. The chip is the second completely new microprocessor to implement the Alpha instruction set architecture. It was designed in Digital's 0.5-micrometer (um) complementary metal-oxide semiconductor (CMOS) process. First silicon was powered on in February 1994; the part has been commercially available since January 1995. At SPECint92/SPECfp92 ratings of 345/505 (estimated), the Alpha 21164 achieved new heights of performance. The performance of this new implementation results from aggressive circuit design using the latest 0.5-um CMOS technology and significant architectural improvements over the first Alpha implementation.[1] The chip is designed to operate at 300 MHz, an operating frequency 10 percent faster than the previous implementation (the DECchip 21064 chip) would have if it were scaled into the new 0.5-um CMOS technology.[2] Relative to the previous implementation, the key improvements in machine organization are a doubling of the superscalar dimension to four-way superscalar instruction issue; reduction of many operational latencies, including the latency in the primary data cache; a memory subsystem that does not block other operations after a cache miss; and a large, on-chip, second-level, write-back cache. The 21164 microprocessor implements the Alpha instruction set architecture. It runs existing Alpha programs without modification. It supports a 43-bit virtual address and a 40-bit physical address. The page size is 8 kilobytes (KB). In the following sections, we describe the five functional units of the Alpha 21164 microprocessor and relate some of the design decisions that improved the performance of the microprocessor. First, we give an overview of the chip's internal organization and pipeline layout. Internal Organization Figure 1 shows a block diagram of the chip's five functional units: the instruction unit, the integer function unit, the floating-point unit, the memory unit, and the cache control and bus interface unit (called the C-box). The three on-chip caches are also shown. The instruction cache and data cache are primary, direct-mapped caches. They are backed by the second-level cache, which is a set-associative cache that holds instructions and data. [Figure 1 (Five Functional Units on the Alpha 21164 Microprocessor) is not available in ASCII format.] Alpha 21164 Pipeline The Alpha 21164 pipeline length is 7 stages for integer execution, 9 stages for floating-point execution, and as many as 12 stages for on-chip memory instruction execution. Additional stages are required for off-chip memory instruction execution. Figure 2 depicts the pipeline for integer, floating-point, and memory operations. [Figure 2 (Alpha 21164 Pipeline Stages) is not available in ASCII format.]",
"title": ""
},
{
"docid": "26aee4feb558468d571138cd495f51d3",
"text": "A 300-MHz, custom 64-bit VLSI, second-generation Alpha CPU chip has been developed. The chip was designed in a 0.5-um CMOS technology using four levels of metal. The die size is 16.5 mm by 18.1 mm, contains 9.3 million transistors, operates at 3.3 V, and supports 3.3-V/5.0-V interfaces. Power dissipation is 50 W. It contains an 8-KB instruction cache; an 8-KB data cache; and a 96-KB unified second-level cache. The chip can issue four instructions per cycle and delivers 1,200 mips/600 MFLOPS (peak). Several noteworthy circuit and implementation techniques were used to attain the target operating frequency.",
"title": ""
}
] |
[
{
"docid": "39a4a7ac64b05811984d2782381314b7",
"text": "Recently there has been a growing concern that many published research findings do not hold up in attempts to replicate them. We argue that this problem may originate from a culture of 'you can publish if you found a significant effect'. This culture creates a systematic bias against the null hypothesis which renders meta-analyses questionable and may even lead to a situation where hypotheses become difficult to falsify. In order to pinpoint the sources of error and possible solutions, we review current scientific practices with regard to their effect on the probability of drawing a false-positive conclusion. We explain why the proportion of published false-positive findings is expected to increase with (i) decreasing sample size, (ii) increasing pursuit of novelty, (iii) various forms of multiple testing and researcher flexibility, and (iv) incorrect P-values, especially due to unaccounted pseudoreplication, i.e. the non-independence of data points (clustered data). We provide examples showing how statistical pitfalls and psychological traps lead to conclusions that are biased and unreliable, and we show how these mistakes can be avoided. Ultimately, we hope to contribute to a culture of 'you can publish if your study is rigorous'. To this end, we highlight promising strategies towards making science more objective. Specifically, we enthusiastically encourage scientists to preregister their studies (including a priori hypotheses and complete analysis plans), to blind observers to treatment groups during data collection and analysis, and unconditionally to report all results. Also, we advocate reallocating some efforts away from seeking novelty and discovery and towards replicating important research findings of one's own and of others for the benefit of the scientific community as a whole. We believe these efforts will be aided by a shift in evaluation criteria away from the current system which values metrics of 'impact' almost exclusively and towards a system which explicitly values indices of scientific rigour.",
"title": ""
},
{
"docid": "3a2456fce98db50aee2d342ef838b349",
"text": "There is a large variety of trackers, which have been proposed in the literature during the last two decades with some mixed success. Object tracking in realistic scenarios is a difficult problem, therefore, it remains a most active area of research in computer vision. A good tracker should perform well in a large number of videos involving illumination changes, occlusion, clutter, camera motion, low contrast, specularities, and at least six more aspects. However, the performance of proposed trackers have been evaluated typically on less than ten videos, or on the special purpose datasets. In this paper, we aim to evaluate trackers systematically and experimentally on 315 video fragments covering above aspects. We selected a set of nineteen trackers to include a wide variety of algorithms often cited in literature, supplemented with trackers appearing in 2010 and 2011 for which the code was publicly available. We demonstrate that trackers can be evaluated objectively by survival curves, Kaplan Meier statistics, and Grubs testing. We find that in the evaluation practice the F-score is as effective as the object tracking accuracy (OTA) score. The analysis under a large variety of circumstances provides objective insight into the strengths and weaknesses of trackers.",
"title": ""
},
{
"docid": "c5081f86c4a173a40175e65b05d9effb",
"text": "Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.",
"title": ""
},
{
"docid": "fb5980237fba3d8b717c95fa390686f7",
"text": "We have demonstrated a technique to create a digitalto-analog converter (DAC) from lever arms and actuator arrays. These DACs take digital electrical signals and produce mechanical displacements at the output. A 3-bit DAC with thermal actuator arrays operating on 5V signals has been demonstrated with 2 states of displacements. This DAC has a least significant bit (LSB) size of 0.74μm, an integral nonlinearity (INL) of ± 0.27LSB, and a differential nonlinearity (DNL) of ± 0.25LSB. The DAC was coupled to a hinged micro mirror to test the dynamic and transient response using a laser setup. Such devices could be used as digitally controlled actuators to drive mechanisms in micro optics, mechanical computing, and micro robotics.",
"title": ""
},
{
"docid": "894e945c9bb27f5464d1b8f119139afc",
"text": "Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), which is a hybrid network consisting of an autoencoder that learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p = .0012) for our model C=0.75 (95% CI: 0.70 - 0.79) than the human benchmark of C=0.59 (95% CI: 0.53 - 0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.",
"title": ""
},
{
"docid": "c2332c4484fa18482ef072c003cf2caf",
"text": "The rapid development of smartphone technologies have resulted in the evolution of mobile botnets. The implications of botnets have inspired attention from the academia and the industry alike, which includes vendors, investors, hackers, and researcher community. Above all, the capability of botnets is uncovered through a wide range of malicious activities, such as distributed denial of service (DDoS), theft of business information, remote access, online or click fraud, phishing, malware distribution, spam emails, and building mobile devices for the illegitimate exchange of information and materials. In this study, we investigate mobile botnet attacks by exploring attack vectors and subsequently present a well-defined thematic taxonomy. By identifying the significant parameters from the taxonomy, we compared the effects of existing mobile botnets on commercial platforms as well as open source mobile operating system platforms. The parameters for review include mobile botnet architecture, platform, target audience, vulnerabilities or loopholes, operational impact, and detection approaches. In relation to our findings, research challenges are then presented in this domain.",
"title": ""
},
{
"docid": "be76c3b57724f61f7bfc4cc8b0271170",
"text": "Improving management of information and knowledge in organizations has long been a major objective, but efforts to address it often foundered. Knowledge typically resides in structured documents, informal discussions that may or may not persist online, and in tacit form. Terminology differences and dispersed contextual information hinder efforts to use formal representations. Features of dynamic emerging technologies — unstructured tagging, web-logs, and search — show strong promise in overcoming past obstacles. They exploit digital representations of less formal language and could greatly increase the value of such representations.",
"title": ""
},
{
"docid": "bd11fa646d1c3b17ad17da0a8166a7c2",
"text": "A graph G is called (a, b)-choosable if for any list assignment L that assigns to each vertex v a set L(v ) of a permissible colors, there is a b-tuple L-coloring of G. An (a, 1)-choosable graph is also called achoosable. In the pioneering article on list coloring of graphs by Erdős et al. [2], 2-choosable graphs are characterized. Confirming a special case of a conjecture in [2], Tuza and Voigt [3] proved that 2-choosable graphs are (2m, m)-choosable for any positive integer m. On the other hand, Voigt [6] proved that if m is an odd integer, then these are the only (2m, m)choosable graphs; however, when m is even, there are (2m, m)-choosable graphs that are not 2-choosable. A graph is called 3-choosable-critical if it is not 2-choosable, but all its proper subgraphs are 2-choosable. Voigt conjectured that for every positive integer m, all bipartite 3-choosablecritical graphs are (4m, 2m)-choosable. In this article, we determine which 3-choosable-critical graphs are (4, 2)-choosable, refuting Voigt’s conjecture in the process. Nevertheless, a weaker version of the conjecture is true: we prove that there is an even integer k such that for any positive integer m, every bipartite 3-choosable-critical graph is (2km, km)-choosable. Moving ∗ Contract grant sponsor: CNSF; contract grant number: 11571319. Journal of Graph Theory C © 2016 Wiley Periodicals, Inc. 412 ON (4, 2)-CHOOSABLE GRAPHS 413 beyond 3-choosable-critical graphs, we present an infinite family of non-3choosable-critical graphs that have been shown by computer analysis to be (4, 2)-choosable. This shows that the family of all (4, 2)-choosable graphs has rich structure. C © 2016 Wiley Periodicals, Inc. J. Graph Theory 85: 412–428, 2017",
"title": ""
},
{
"docid": "91e97df8ee68b2aa8219faeba398f20f",
"text": "We propose a method for animating still manga imagery through camera movements. Given a series of existing manga pages, we start by automatically extracting panels, comic characters, and balloons from the manga pages. Then, we use a data-driven graphical model to infer per-panel motion and emotion states from low-level visual patterns. Finally, by combining domain knowledge of film production and characteristics of manga, we simulate camera movements over the manga pages, yielding an animation. The results augment the still manga contents with animated motion that reveals the mood and tension of the story, while maintaining the original narrative. We have tested our method on manga series of different genres, and demonstrated that our method can generate animations that are more effective in storytelling and pacing, with less human efforts, as compared with prior works. We also show two applications of our method, mobile comic reading, and comic trailer generation.",
"title": ""
},
{
"docid": "790de0f792c81b9e26676f800e766759",
"text": "The ubiquity of online fashion shopping demands effective recommendation services for customers. In this paper, we study two types of fashion recommendation: (i) suggesting an item that matches existing components in a set to form a stylish outfit (a collection of fashion items), and (ii) generating an outfit with multimodal (images/text) specifications from a user. To this end, we propose to jointly learn a visual-semantic embedding and the compatibility relationships among fashion items in an end-to-end fashion. More specifically, we consider a fashion outfit to be a sequence (usually from top to bottom and then accessories) and each item in the outfit as a time step. Given the fashion items in an outfit, we train a bidirectional LSTM (Bi-LSTM) model to sequentially predict the next item conditioned on previous ones to learn their compatibility relationships. Further, we learn a visual-semantic space by regressing image features to their semantic representations aiming to inject attribute and category information as a regularization for training the LSTM. The trained network can not only perform the aforementioned recommendations effectively but also predict the compatibility of a given outfit. We conduct extensive experiments on our newly collected Polyvore dataset, and the results provide strong qualitative and quantitative evidence that our framework outperforms alternative methods.",
"title": ""
},
{
"docid": "a7d4272ebc1afefe2a68b86bcb70e757",
"text": "Language production processes can provide insight into how language comprehension works and language typology-why languages tend to have certain characteristics more often than others. Drawing on work in memory retrieval, motor planning, and serial order in action planning, the Production-Distribution-Comprehension (PDC) account links work in the fields of language production, typology, and comprehension: (1) faced with substantial computational burdens of planning and producing utterances, language producers implicitly follow three biases in utterance planning that promote word order choices that reduce these burdens, thereby improving production fluency. (2) These choices, repeated over many utterances and individuals, shape the distributions of utterance forms in language. The claim that language form stems in large degree from producers' attempts to mitigate utterance planning difficulty is contrasted with alternative accounts in which form is driven by language use more broadly, language acquisition processes, or producers' attempts to create language forms that are easily understood by comprehenders. (3) Language perceivers implicitly learn the statistical regularities in their linguistic input, and they use this prior experience to guide comprehension of subsequent language. In particular, they learn to predict the sequential structure of linguistic signals, based on the statistics of previously-encountered input. Thus, key aspects of comprehension behavior are tied to lexico-syntactic statistics in the language, which in turn derive from utterance planning biases promoting production of comparatively easy utterance forms over more difficult ones. This approach contrasts with classic theories in which comprehension behaviors are attributed to innate design features of the language comprehension system and associated working memory. The PDC instead links basic features of comprehension to a different source: production processes that shape language form.",
"title": ""
},
{
"docid": "e4adbb37f365197249d5e0aacb8f27d4",
"text": "Workplace stress can influence healthcare professionals' physical and emotional well-being by curbing their efficiency and having a negative impact on their overall quality of life. The aim of the present study was to investigate the impact that work environment in a local public general hospital can have on the health workers' mental-emotional health and find strategies in order to cope with negative consequences. The study took place from July 2010 to October 2010. Our sample consisted of 200 healthcare professionals aged 21-58 years working in a 240-bed general hospital and the response rate was 91.36%). Our research protocol was first approved by the hospital's review board. A standardized questionnaire that investigates strategies for coping with stressful conditions was used. A standardized questionnaire was used in the present study Coping Strategies for Stressful Events, evaluating the strategies that persons employ in order to overcome a stressful situation or event. The questionnaire was first tested for validity and reliability which were found satisfactory (Cronbach's α=0.862). Strict anonymity of the participants was guaranteed. The SPSS 16.0 software was used for the statistical analysis. Regression analysis showed that health professionals' emotional health can be influenced by strategies for dealing with stressful events, since positive re-assessment, quitting and seeking social support are predisposing factors regarding the three first quality of life factors of the World Health Organization Quality of Life - BREF. More specifically, for the physical health factor, positive re-assessment (t=3.370, P=0.001) and quitting (t=-2.564, P=0.011) are predisposing factors. For the 'mental health and spirituality' regression model, positive re-assessment (t=5.528, P=0.000) and seeking social support (t=-1.991, P=0.048) are also predisposing factors, while regarding social relationships positive re-assessment (t=4.289, P=0.000) is a predisposing factor. According to our findings, there was a notable lack of workplace stress management strategies, which the participants usually perceive as a lack of interest on behalf of the management regarding their emotional state. Some significant factors for lowering workplace stress were found to be the need to encourage and morally reward the staff and also to provide them with opportunities for further or continuous education.",
"title": ""
},
{
"docid": "b9b29130b36fd8962abf121164f58827",
"text": "We introduce a simple and effective method to learn discourse-specific word embeddings (DSWE) for implicit discourse relation recognition. Specifically, DSWE is learned by performing connective classification on massive explicit discourse data, and capable of capturing discourse relationships between words. On the PDTB data set, using DSWE as features achieves significant improvements over baselines.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "e4e97569f53ddde763f4f28559c96ba6",
"text": "With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.",
"title": ""
},
{
"docid": "8914e1a38db6b47f4705f0c684350d38",
"text": "Style transfer is the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context. This paper introduces a new method for automatic style transfer. We first learn a latent representation of the input sentence which is grounded in a language translation model in order to better preserve the meaning of the sentence while reducing stylistic properties. Then adversarial generation techniques are used to make the output match the desired style. We evaluate this technique on three different style transformations: sentiment, gender and political slant. Compared to two state-of-the-art style transfer modeling techniques we show improvements both in automatic evaluation of style transfer and in manual evaluation of meaning preservation and fluency.",
"title": ""
},
{
"docid": "ab33dcd4172dec6cc88e13af867fed88",
"text": "It is necessary to understand the content of articles and user preferences to make effective news recommendations. While ID-based methods, such as collaborative filtering and low-rank factorization, are well known for making recommendations, they are not suitable for news recommendations because candidate articles expire quickly and are replaced with new ones within short spans of time. Word-based methods, which are often used in information retrieval settings, are good candidates in terms of system performance but have issues such as their ability to cope with synonyms and orthographical variants and define \"queries\" from users' historical activities. This paper proposes an embedding-based method to use distributed representations in a three step end-to-end manner: (i) start with distributed representations of articles based on a variant of a denoising autoencoder, (ii) generate user representations by using a recurrent neural network (RNN) with browsing histories as input sequences, and (iii) match and list articles for users based on inner-product operations by taking system performance into consideration. The proposed method performed well in an experimental offline evaluation using past access data on Yahoo! JAPAN's homepage. We implemented it on our actual news distribution system based on these experimental results and compared its online performance with a method that was conventionally incorporated into the system. As a result, the click-through rate (CTR) improved by 23% and the total duration improved by 10%, compared with the conventionally incorporated method. Services that incorporated the method we propose are already open to all users and provide recommendations to over ten million individual users per day who make billions of accesses per month.",
"title": ""
},
{
"docid": "08df6cd44a26be6c4cc96082631a0e6e",
"text": "In the natural habitat of our ancestors, physical activity was not a preventive intervention but a matter of survival. In this hostile environment with scarce food and ubiquitous dangers, human genes were selected to optimize aerobic metabolic pathways and conserve energy for potential future famines.1 Cardiac and vascular functions were continuously challenged by intermittent bouts of high-intensity physical activity and adapted to meet the metabolic demands of the working skeletal muscle under these conditions. When speaking about molecular cardiovascular effects of exercise, we should keep in mind that most of the changes from baseline are probably a return to normal values. The statistical average of physical activity in Western societies is so much below the levels normal for our genetic background that sedentary lifestyle in combination with excess food intake has surpassed smoking as the No. 1 preventable cause of death in the United States.2 Physical activity has been shown to have beneficial effects on glucose metabolism, skeletal muscle function, ventilator muscle strength, bone stability, locomotor coordination, psychological well-being, and other organ functions. However, in the context of this review, we will focus entirely on important molecular effects on the cardiovascular system. The aim of this review is to provide a bird’s-eye view on what is known and unknown about the physiological and biochemical mechanisms involved in mediating exercise-induced cardiovascular effects. The resulting map is surprisingly detailed in some areas (ie, endothelial function), whereas other areas, such as direct cardiac training effects in heart failure, are still incompletely understood. For practical purposes, we have decided to use primarily an anatomic approach to present key data on exercise effects on cardiac and vascular function. For the cardiac effects, the left ventricle and the cardiac valves will be described separately; for the vascular effects, we will follow the arterial vascular tree, addressing changes in the aorta, the large conduit arteries, the resistance vessels, and the microcirculation before turning our attention toward the venous and the pulmonary circulation (Figure 1). Cardiac Effects of Exercise Left Ventricular Myocardium and Ventricular Arrhythmias The maintenance of left ventricular (LV) mass and function depends on regular exercise. Prolonged periods of physical inactivity, as studied in bed rest trials, lead to significant reductions in LV mass and impaired cardiac compliance, resulting in reduced upright stroke volume and orthostatic intolerance.3 In contrast, a group of bed rest subjects randomized to regular supine lower-body negative pressure treadmill exercise showed an increase in LV mass and a preserved LV stoke volume.4 In previously sedentary healthy subjects, a 12-week moderate exercise program induced a mild cardiac hypertrophic response as measured by cardiac magnetic resonance imaging.5 These findings highlight the plasticity of LV mass and function in relation to the current level of physical activity.",
"title": ""
},
{
"docid": "8b7b2f6ac15d8f2f8101355d4de13203",
"text": "Knowledge management capability, customer relationship management, and service quality Shu-Mei Tseng Article information: To cite this document: Shu-Mei Tseng , (2016),\"Knowledge management capability, customer relationship management, and service quality\", Journal of Enterprise Information Management, Vol. 29 Iss 2 pp. Permanent link to this document: http://dx.doi.org/10.1108/JEIM-04-2014-0042",
"title": ""
},
{
"docid": "626cbfd87a6582d36cd1a98342ce2cc2",
"text": "We apply the two-pluyer game assumprio~ls of 1i111ited search horizon and cornn~itnrent to nroves i constant time, to .single-agent heuristic search problems. We present a varicrtion of nrinimcr lookuhead search, and an analog to ulphu-betu pruning rlrot signijicantly improves the efficiency c. the algorithm. Paradoxically. the search horizon reachuble with this algorithm increases wir. increusing branching facior. hl addition. we present a new algorithm, called Real-Time-A ', fo interleaving planning and execution. We prove that the ulgorithm makes locally optimal decision and is guaranteed to find a solution. We also present a learning version of this algorithm thrr improves its performance over successive problen~ solving trials by learning more accurate heuristi values, and prove that the learned values converge to their exact values along every optimal path These algorithms ef/ectively solve significanrly larger problems rhan have previously beerr solvabk using heuristic evaluation functions.",
"title": ""
}
] |
scidocsrr
|
acb41082d9af95270d6f510318b4dedb
|
Wisteria: Nurturing Scalable Data Cleaning Infrastructure
|
[
{
"docid": "d95cd76008dd65d5d7f00c82bad013d3",
"text": "Though data analysis tools continue to improve, analysts still expend an inordinate amount of time and effort manipulating data and assessing data quality issues. Such \"data wrangling\" regularly involves reformatting data values or layout, correcting erroneous or missing values, and integrating multiple data sources. These transforms are often difficult to specify and difficult to reuse across analysis tasks, teams, and tools. In response, we introduce Wrangler, an interactive system for creating data transformations. Wrangler combines direct manipulation of visualized data with automatic inference of relevant transforms, enabling analysts to iteratively explore the space of applicable operations and preview their effects. Wrangler leverages semantic data types (e.g., geographic locations, dates, classification codes) to aid validation and type conversion. Interactive histories support review, refinement, and annotation of transformation scripts. User study results show that Wrangler significantly reduces specification time and promotes the use of robust, auditable transforms instead of manual editing.",
"title": ""
},
{
"docid": "ead461ea8f716f6fab42c08bb7b54728",
"text": "Despite the increasing importance of data quality and the rich theoretical and practical contributions in all aspects of data cleaning, there is no single end-to-end off-the-shelf solution to (semi-)automate the detection and the repairing of violations w.r.t. a set of heterogeneous and ad-hoc quality constraints. In short, there is no commodity platform similar to general purpose DBMSs that can be easily customized and deployed to solve application-specific data quality problems. In this paper, we present NADEEF, an extensible, generalized and easy-to-deploy data cleaning platform. NADEEF distinguishes between a programming interface and a core to achieve generality and extensibility. The programming interface allows the users to specify multiple types of data quality rules, which uniformly define what is wrong with the data and (possibly) how to repair it through writing code that implements predefined classes. We show that the programming interface can be used to express many types of data quality rules beyond the well known CFDs (FDs), MDs and ETL rules. Treating user implemented interfaces as black-boxes, the core provides algorithms to detect errors and to clean data. The core is designed in a way to allow cleaning algorithms to cope with multiple rules holistically, i.e. detecting and repairing data errors without differentiating between various types of rules. We showcase two implementations for core repairing algorithms. These two implementations demonstrate the extensibility of our core, which can also be replaced by other user-provided algorithms. Using real-life data, we experimentally verify the generality, extensibility, and effectiveness of our system.",
"title": ""
},
{
"docid": "c6abeae6e9287f04b472595a47e974ad",
"text": "Data curation is the act of discovering a data source(s) of interest, cleaning and transforming the new data, semantically integrating it with other local data sources, and deduplicating the resulting composite. There has been much research on the various components of curation (especially data integration and deduplication). However, there has been little work on collecting all of the curation components into an integrated end-to-end system. In addition, most of the previous work will not scale to the sizes of problems that we are finding in the field. For example, one web aggregator requires the curation of 80,000 URLs and a second biotech company has the problem of curating 8000 spreadsheets. At this scale, data curation cannot be a manual (human) effort, but must entail machine learning approaches with a human assist only when necessary. This paper describes Data Tamer, an end-to-end curation system we have built at M.I.T. Brandeis, and Qatar Computing Research Institute (QCRI). It expects as input a sequence of data sources to add to a composite being constructed over time. A new source is subjected to machine learning algorithms to perform attribute identification, grouping of attributes into tables, transformation of incoming data and deduplication. When necessary, a human can be asked for guidance. Also, Data Tamer includes a data visualization component so a human can examine a data source at will and specify manual transformations. We have run Data Tamer on three real world enterprise curation problems, and it has been shown to lower curation cost by about 90%, relative to the currently deployed production software. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 6th Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.",
"title": ""
}
] |
[
{
"docid": "fa641ed831a7ed681a5284ad0eda3bfa",
"text": "As part of Agile transformation in past few years we have seen IT organizations adopting continuous integration principles in their software delivery lifecycle, which has improved the efficiency of development teams. With the time it has been realized that this optimization as part of continuous integration - alone - is just not helping to make the entire delivery lifecycle efficient or is not driving the organization efficiency. Unless all the pieces of a software delivery lifecycle work like a well oiled machine - efficiency of organization to optimize the delivery lifecycle can not be met. This is the problem which DevOps tries to address. This paper tries to cover all aspects of Devops applicable to various phases of SDLC and specifically talks about business need, ways to move from continuous integration to continuous delivery and its benefits. Continuous delivery transformation in this paper is explained with a real life case study that how infrastructure can be maintained just in form of code (IAAC). Finally this paper touches upon various considerations one must evaluate before adopting DevOps and what kind of benefits one can expect.",
"title": ""
},
{
"docid": "ca1f929b7695f004313335bcef53ac1d",
"text": "Shading systems have the potential to reduce energy consumption of electric lighting and improve visual comfort. Various automated control systems of shading device and electric lighting have been widely used. However, existing lighting and shading systems typically operate independently, i.e., information is not shared, and thus system performance is not optimal. Therefore, integrated control systems have been proposed to maximize energy efficiency and user comfort. Some problems are still unaddressed. For example, the benefits of sharing control information (e.g., HVAC state and occupancy information) between the lighting and shading control systems have not been quantified; the benefits of integrated controls have not been quantified. To address these issues, improved independent and integrated control strategies were proposed by adding shared HVAC state and occupancy information. To provide a quantitative comparison of these control strategies, a co-simulation platform consisting of BCVTB, EnergyPlus and Matlab was developed to perform an in-depth quantitative study of seven control strategies (manual, independent and integrated control strategies). Simulation results for a reference office building were presented for three climate zones (Baltimore, London, Abu-Dhabi), two types of blinds (interior, exterior) and two window-to-wall ratios (66%, 100%). A dynamic occupancy model was developed from actual office occupancy data and used in the simulations. The lighting, heating and cooling energy consumption, electric demand and visual comfort of different control strategies were evaluated and compared. Overall, in most cases, integrated lighting and daylight control outperforms all other strategies in energy and visual comfort performance. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "15a0898247365fa5ff29fd54560f547d",
"text": "SemEval 2018 Task 7 focuses on relation extraction and classification in scientific literature. In this work, we present our tree-based LSTM network for this shared task. Our approach placed 9th (of 28) for subtask 1.1 (relation classification), and 5th (of 20) for subtask 1.2 (relation classification with noisy entities). We also provide an ablation study of features included as input to the network.",
"title": ""
},
{
"docid": "52064068aed0e6fc1a12d05d61d035b4",
"text": "Wireless power transfer systems with multiple transmitters promise advantages of higher transfer efficiencies and focusing effects over single-transmitter systems. From the standard formulation, straightforward maximization of the power transfer efficiency is not trivial. By reformulating the problem, a convex optimization problem emerges, which can be solved efficiently. Further, using Lagrangian duality theory, analytical results are found for the achievable maximum power transfer efficiency and all parameters involved. With these closed-form results, planar and coaxial wireless power transfer setups are investigated.",
"title": ""
},
{
"docid": "138ada76eb85092ec527e1265bffa36b",
"text": "Web service discovery is becoming a challenging and time consuming task due to large number of Web services available on the Internet. Organizing the Web services into functionally similar clusters is one of a very efficient approach for reducing the search space. However, similarity calculation methods that are used in current approaches such as string-based, corpus-based, knowledge-based and hybrid methods have problems that include discovering semantic characteristics, loss of semantic information, encoding fine-grained information and shortage of high-quality ontologies. Because of these issues, the approaches couldn't identify the correct clusters for some services and placed them in wrong clusters. As a result of this, cluster performance is reduced. This paper proposes post-filtering approach to increase precision by rearranging services incorrectly clustered. Our approach uses context aware method that learns term similarity by machine learning under domain context. Experimental results show that our post-filtering approach works efficiently.",
"title": ""
},
{
"docid": "2445b8d7618c051acd743f65ef6f588a",
"text": "Recent developments in analysis methods on the non-linear and non-stationary data have received large attention by the image analysts. In 1998, Huang introduced the empirical mode decomposition (EMD) in signal processing. The EMD approach, fully unsupervised, proved reliable monodimensional (seismic and biomedical) signals. The main contribution of our approach is to apply the EMD to texture extraction and image filtering, which are widely recognized as a difficult and challenging computer vision problem. We developed an algorithm based on bidimensional empirical mode decomposition (BEMD) to extract features at multiple scales or spatial frequencies. These features, called intrinsic mode functions, are extracted by a sifting process. The bidimensional sifting process is realized using morphological operators to detect regional maxima and thanks to radial basis function for surface interpolation. The performance of the texture extraction algorithms, using BEMD method, is demonstrated in the experiment with both synthetic and natural images. q 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "525a819d97e84862d4190b1e0aa4acc0",
"text": "HELIOS2014 is a 2D soccer simulation team which has been participating in the RoboCup competition since 2000. We recently focus on an online multiagent planning using tree search methodology. This paper describes the overview of our search framework and an evaluation method to select the best action sequence.",
"title": ""
},
{
"docid": "2f8635d4da12fd6d161c7b10c140f8f9",
"text": "Technology has made navigation in 3D real time possible and this has made possible what seemed impossible. This paper explores the aspect of deep visual odometry methods for mobile robots. Visual odometry has been instrumental in making this navigation successful. Noticeable challenges in mobile robots including the inability to attain Simultaneous Localization and Mapping have been solved by visual odometry through its cameras which are suitable for human environments. More intuitive, precise and accurate detection have been made possible by visual odometry in mobile robots. Another challenge in the mobile robot world is the 3D map reconstruction for exploration. A dense map in mobile robots can facilitate for localization and more accurate findings. I. VISUAL ODOMETRY IN MOBILE ROBOTS Mobile robot applications heavily rely on the ability of the vehicle to achieve accurate localization. It is essential that a robot is able to maintain knowledge about its position at all times in order to achieve autonomous navigation. To attain this, various techniques, systems and sensors have been established to aid with mobile robot positioning including visual odometry [1]. Importantly, the adoption of Deep Learning based techniques was inspired by the precision to find solutions to numerous standard computer vision problems including object detection, image classification and segmentation. Visual odometry involves the pose estimation process that involves a robot and how they use a stream of images obtained from cameras that are attached to them [2]. The main aim of visual odometry is the estimations from camera pose. It is an approach that avoids contact with the robot for the purpose of ensuring that the mobile robots are effectively positioned. For this reason, the process is quite a challenging task that is related to mapping and simultaneous localization whose main aim is to generate the road map from a stream of visual data [3]. Estimates of motion from pixel differences and features between frames are made based on cameras that are strategically positioned. For mobile robots to achieve an actively controlled navigation, a real time 3D and reliable localization and reconstruction of functions is an essential prerequisite [4]. Mobile robots have to perform localization and mapping functions simultaneously and this poses a major challenge for them. The Simultaneous Localization and Mapping (SLAM) problem has attracted attention as various studies extensively evaluate it [5]. To solve the SLAM problem, visual odometry has been suggested especially because cameras provide high quality information at a low cost from the sensors that are conducive for human environments [6]. The major advances in computer vision also make possible quite a number of synergistic capabilities including terrain and scene classification, object detection and recognition. Notably, the visual odometry in mobile robot have enabled for more precise, intuitive and accurate detection. Although there has been significant progress in the last decade to bring improvements to passive mobile robots into controllable robots that are active, there are still notable challenges in the effort to achieve this. Particularly, a 3D map reconstruction that is fully dense to facilitate for exploration still remains an unsolved problem. It is only through a dense map that mobile robots can be able to more reliably do localization and ultimately leading to findings that are more accurate [7] [8]. 
According to Turan ( [9]), it is essential that adoptions of a comprehensive reconstruction on the suitable 3D method for mobile robots be adopted. This can be made possible through the building of a modular fashion including key frame selection, pre-processing, estimates on sparse then dense alignment based pose, shading based 3D and bundle fusion reconstruction [10]. There is also the challenge of the real time precise localization of the mobile robots that are actively controlled. The study by [11], which employed quantitative and qualitative in trajectory estimations sought to find solution to the challenge of precise localization for the endoscopic robot capsule. The data set was general and this was ensured through the fitting of 3 endoscopic cameras in different locations for the purpose of capturing the endoscopic videos [12]. Stomach videos were recorded for 15 minutes and they contained more than 10,000 frames. Through this, the ground truth was served for the 3D reconstruction module maps’ quantitative evaluations [13]. Its findings proposed that the direct SLAM be implemented on a map fusion based method that is non rigid for the mobile robots [14]. Through this method, high accuracy is likely to be achieved for extensive evaluations and conclusions [15]. The industry of mobile robots continues to face numerous challenges majorly because of enabling technology, including perception, artificial intelligence and power sources [16]. Evidently, motors, actuators and gears are essential to the robotic world today. Work is still in progress in the development of soft robotics, artificial muscles and strategies of assembly that are aimed at developing the autonomous robot’s generation in the coming future that are power efficient and multifunctional. There is also the aspect of robots lacing synchrony, calibration and symmetry which serves to increase the photometric error. This challenge maybe addressed by adopting the direct odometry method [17]. Direct sparse odometry has been recommended by various studies since it has been found to reduce the photometric error. This can be associated to the fact that it combines a probabilistic model with joint optimization of model parameters [9]. It has also been found to maintain high levels of consistency especially because it incorporates geometry parameters which also increase accuracy levels [18].",
"title": ""
},
{
"docid": "f33b73bf41e5253fb4b043a117fcd9e2",
"text": "Traditional information systems return answers after a user submits a complete query. Users often feel \"left in the dark\" when they have limited knowledge about the underlying data, and have to use a try-and-see approach for finding information. A recent trend of supporting autocomplete in these systems is a first step towards solving this problem. In this paper, we study a new information-access paradigm, called \"interactive, fuzzy search,\" in which the system searches the underlying data \"on the fly\" as the user types in query keywords. It extends autocomplete interfaces by (1) allowing keywords to appear in multiple attributes (in an arbitrary order) of the underlying data; and (2) finding relevant records that have keywords matching query keywords approximately. This framework allows users to explore data as they type, even in the presence of minor errors. We study research challenges in this framework for large amounts of data. Since each keystroke of the user could invoke a query on the backend, we need efficient algorithms to process each query within milliseconds. We develop various incremental-search algorithms using previously computed and cached results in order to achieve an interactive speed. We have deployed several real prototypes using these techniques. One of them has been deployed to support interactive search on the UC Irvine people directory, which has been used regularly and well received by users due to its friendly interface and high efficiency.",
"title": ""
},
{
"docid": "9780c2d63739b8bf4f5c48f12014f605",
"text": "It has been hypothesized that unexplained infertility may be related to specific personality and coping styles. We studied two groups of women with explained infertility (EIF, n = 63) and unexplained infertility (UIF, n = 42) undergoing an in vitro fertilization (IVF) cycle. Women completed personality and coping style questionnaires prior to the onset of the cycle, and state depression and anxiety scales before and at two additional time points during the cycle. Almost no in-between group differences were found at any of the measured time points in regards to the Minnesota Multiphasic Personality Inventory-2 validity and clinical scales, Illness Cognitions and Life Orientation Test, or for the situational measures. The few differences found suggest a more adaptive, better coping, and functioning defensive system in women with EIF. In conclusion, we did not find any clinically significant personality differences or differences in depression or anxiety levels between women with EIF and UIF during an IVF cycle. Minor differences found are probably a reaction to the ambiguous medical situation with its uncertain prognosis, amplifying certain traits which are not specific to one psychological structure but rather to the common experience shared by the group. The results of this study do not support the possibility that personality traits are involved in the pathophysiology of unexplained infertility.",
"title": ""
},
{
"docid": "33c6a2c96fcb8236c9ce40b2f1770d04",
"text": "Intelligent Personal Assistant (IPA) agents are software agents which assist users in performing specific tasks. They should be able to communicate, cooperate, discuss, and guide people. This paper presents a proposal to add Semantic Web Knowledge to IPA agents. In our solution, the IPA agent has a modular knowledge organization composed by four differentiated areas: (i) the rational area, which adds semantic web knowledge, (ii) the association area, which simplifies building appropriate responses, (iii) the commonsense area, which provides commonsense responses, and (iv) the behavioral area, which allows IPA agents to show empathy. Our main objective is to create more intelligent and more human alike IPA agents, enhancing the current abilities that these software agents provide.",
"title": ""
},
{
"docid": "565f815ef0c1dd5107f053ad39dade20",
"text": "Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.",
"title": ""
},
{
"docid": "0d2fd731787e70bb9a06ab32306cb5e8",
"text": "Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks.",
"title": ""
},
{
"docid": "2b3e2570e9ecd86be9300220fa78d63d",
"text": "We evaluate the prediction accuracy of models designed using different classification methods depending on the technique used to select variables, and we study the relationship between the structure of the models and their ability to correctly predict financial failure. We show that a neural network based model using a set of variables selected with a criterion that it is adapted to the network leads to better results than a set chosen with criteria used in the financial literature. We also show that the way in which a set of variables may represent the financial profiles of healthy companies plays a role in Type I error reduction.",
"title": ""
},
{
"docid": "173a3ae90795c129016e3b126d719cb8",
"text": "While existing work on neural architecture search (NAS) tunes hyperparameters in a separate post-processing step, we demonstrate that architectural choices and other hyperparameter settings interact in a way that can render this separation suboptimal. Likewise, we demonstrate that the common practice of using very few epochs during the main NAS and much larger numbers of epochs during a post-processing step is inefficient due to little correlation in the relative rankings for these two training regimes. To combat both of these problems, we propose to use a recent combination of Bayesian optimization and Hyperband for efficient joint neural architecture and hyperparameter search.",
"title": ""
},
{
"docid": "5dc78e62ca88a6a5f253417093e2aa4d",
"text": "This paper surveys the scientific and trade literature on cybersecurity for unmanned aerial vehicles (UAV), concentrating on actual and simulated attacks, and the implications for small UAVs. The review is motivated by the increasing use of small UAVs for inspecting critical infrastructures such as the electric utility transmission and distribution grid, which could be a target for terrorism. The paper presents a modified taxonomy to organize cyber attacks on UAVs and exploiting threats by Attack Vector and Target. It shows that, by Attack Vector, there has been one physical attack and ten remote attacks. By Target, there have been six attacks on GPS (two jamming, four spoofing), two attacks on the control communications stream (a deauthentication attack and a zero-day vulnerabilities attack), and two attacks on data communications stream (two intercepting the data feed, zero executing a video replay attack). The paper also divides and discusses the findings by large or small UAVs, over or under 25 kg, but concentrates on small UAVs. The survey concludes that UAV-related research to counter cybersecurity threats focuses on GPS Jamming and Spoofing, but ignores attacks on the controls and data communications stream. The gap in research on attacks on the data communications stream is concerning, as an operator can see a UAV flying off course due to a control stream attack but has no way of detecting a video replay attack (substitution of a video feed).",
"title": ""
},
{
"docid": "ac0e77985a38a3fc024de8a6f504a98c",
"text": "High-protein, low-carbohydrate (HPLC) diets are common in cats, but their effect on the gut microbiome has been ignored. The present study was conducted to test the effects of dietary protein:carbohydrate ratio on the gut microbiota of growing kittens. Male domestic shorthair kittens were raised by mothers fed moderate-protein, moderate-carbohydrate (MPMC; n 7) or HPLC (n 7) diets, and then weaned at 8 weeks onto the same diet. Fresh faeces were collected at 8, 12 and 16 weeks; DNA was extracted, followed by amplification of the V4–V6 region of the 16S rRNA gene using 454 pyrosequencing. A total of 384 588 sequences (average of 9374 per sample) were generated. Dual hierarchical clustering indicated distinct clustering based on the protein:carbohydrate ratio regardless of age. The protein:carbohydrate ratio affected faecal bacteria. Faecal Actinobacteria were greater (P< 0·05) and Fusobacteria were lower (P< 0·05) in MPMC-fed kittens. Faecal Clostridium, Faecalibacterium, Ruminococcus, Blautia and Eubacterium were greater (P< 0·05) in HPLC-fed kittens, while Dialister, Acidaminococcus, Bifidobacterium, Megasphaera and Mitsuokella were greater (P< 0·05) in MPMC-fed kittens. Principal component analysis of faecal bacteria and blood metabolites and hormones resulted in distinct clusters. Of particular interest was the clustering of blood TAG with faecal Clostridiaceae, Eubacteriaceae, Ruminococcaceae, Fusobacteriaceae and Lachnospiraceae; blood ghrelin with faecal Coriobacteriaceae, Bifidobacteriaceae and Veillonellaceae; and blood glucose, cholesterol and leptin with faecal Lactobacillaceae. The present results demonstrate that the protein:carbohydrate ratio affects the faecal microbiome, and highlight the associations between faecal microbes and circulating hormones and metabolites that may be important in terms of satiety and host metabolism.",
"title": ""
},
{
"docid": "6349e0444220d4a8ea3c34755954a58a",
"text": "We present QuickNet, a fast and accurate network architecture that is both faster and significantly more accurate than other “fast” deep architectures like SqueezeNet. Furthermore, it uses less parameters than previous networks, making it more memory efficient. We do this by making two major modifications to the reference “Darknet” model (Redmon et al, 2015): 1) The use of depthwise separable convolutions and 2) The use of parametric rectified linear units. We make the observation that parametric rectified linear units are computationally equivalent to leaky rectified linear units at test time and the observation that separable convolutions can be interpreted as a compressed Inception network (Chollet, 2016). Using these observations, we derive a network architecture, which we call QuickNet, that is both faster and more accurate than previous models. Our architecture provides at least four major advantages: (1) A smaller model size, which is more tenable on memory constrained systems; (2) A significantly faster network which is more tenable on computationally constrained systems; (3) A high accuracy of 95.7% on the CIFAR-10 Dataset which outperforms all but one result published so far, although we note that our works are orthogonal approaches and can be combined (4) Orthogonality to previous model compression approaches allowing for further speed gains to be realized.",
"title": ""
},
{
"docid": "58359b7b3198504fa2475cc0f20ccc2d",
"text": "OBJECTIVES\nTo review and synthesize the state of research on a variety of meditation practices, including: the specific meditation practices examined; the research designs employed and the conditions and outcomes examined; the efficacy and effectiveness of different meditation practices for the three most studied conditions; the role of effect modifiers on outcomes; and the effects of meditation on physiological and neuropsychological outcomes.\n\n\nDATA SOURCES\nComprehensive searches were conducted in 17 electronic databases of medical and psychological literature up to September 2005. Other sources of potentially relevant studies included hand searches, reference tracking, contact with experts, and gray literature searches.\n\n\nREVIEW METHODS\nA Delphi method was used to develop a set of parameters to describe meditation practices. Included studies were comparative, on any meditation practice, had more than 10 adult participants, provided quantitative data on health-related outcomes, and published in English. Two independent reviewers assessed study relevance, extracted the data and assessed the methodological quality of the studies.\n\n\nRESULTS\nFive broad categories of meditation practices were identified (Mantra meditation, Mindfulness meditation, Yoga, Tai Chi, and Qi Gong). Characterization of the universal or supplemental components of meditation practices was precluded by the theoretical and terminological heterogeneity among practices. Evidence on the state of research in meditation practices was provided in 813 predominantly poor-quality studies. The three most studied conditions were hypertension, other cardiovascular diseases, and substance abuse. Sixty-five intervention studies examined the therapeutic effect of meditation practices for these conditions. Meta-analyses based on low-quality studies and small numbers of hypertensive participants showed that TM(R), Qi Gong and Zen Buddhist meditation significantly reduced blood pressure. Yoga helped reduce stress. Yoga was no better than Mindfulness-based Stress Reduction at reducing anxiety in patients with cardiovascular diseases. No results from substance abuse studies could be combined. The role of effect modifiers in meditation practices has been neglected in the scientific literature. The physiological and neuropsychological effects of meditation practices have been evaluated in 312 poor-quality studies. Meta-analyses of results from 55 studies indicated that some meditation practices produced significant changes in healthy participants.\n\n\nCONCLUSIONS\nMany uncertainties surround the practice of meditation. Scientific research on meditation practices does not appear to have a common theoretical perspective and is characterized by poor methodological quality. Firm conclusions on the effects of meditation practices in healthcare cannot be drawn based on the available evidence. Future research on meditation practices must be more rigorous in the design and execution of studies and in the analysis and reporting of results.",
"title": ""
}
] |
scidocsrr
|
88dd387422b83e6765347f92f1ddf59a
|
L0 sparse graphical modeling
|
[
{
"docid": "2871de581ee0efe242438567ca3a57dd",
"text": "The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain-for example, in terms of spatial finite-differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed-sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the l(1) norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast enhanced angiography.",
"title": ""
}
] |
[
{
"docid": "3fc3ea7bb6c5342bcbc9d046b0a2537f",
"text": "We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.",
"title": ""
},
{
"docid": "afa8b1315f051fa6f683f63d58fcc3d4",
"text": "Our opinions and judgments are increasingly shaped by what we read on social media -- whether they be tweets and posts in social networks, blog posts, or review boards. These opinions could be about topics such as consumer products, politics, life style, or celebrities. Understanding how users in a network update opinions based on their neighbor's opinions, as well as what global opinion structure is implied when users iteratively update opinions, is important in the context of viral marketing and information dissemination, as well as targeting messages to users in the network.\n In this paper, we consider the problem of modeling how users update opinions based on their neighbors' opinions. We perform a set of online user studies based on the celebrated conformity experiments of Asch [1]. Our experiments are carefully crafted to derive quantitative insights into developing a model for opinion updates (as opposed to deriving psychological insights). We show that existing and widely studied theoretical models do not explain the entire gamut of experimental observations we make. This leads us to posit a new, nuanced model that we term the BVM. We present preliminary theoretical and simulation results on the convergence and structure of opinions in the entire network when users iteratively update their respective opinions according to the BVM. We show that consensus and polarization of opinions arise naturally in this model under easy to interpret initial conditions on the network.",
"title": ""
},
{
"docid": "74b1a39f88ccce2c2f865f36e5117b51",
"text": "Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module is incorporated in the middle takes the output of the bidirectional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the input. The experiment shows that our model outperforms common neural network models (CNN, RNN, Bi-RNN) on a sentiment analysis task. Besides, the analysis of how sequence length influences the RCNN with highway layers shows that our model could learn good representation for the long text.",
"title": ""
},
{
"docid": "c53021193518ebdd7006609463bafbcc",
"text": "BACKGROUND AND OBJECTIVES\nSleep is important to child development, but there is limited understanding of individual developmental patterns of sleep, their underlying determinants, and how these influence health and well-being. This article explores the presence of various sleep patterns in children and their implications for health-related quality of life.\n\n\nMETHODS\nData were collected from the Longitudinal Study of Australian Children. Participants included 2926 young children followed from age 0 to 1 years to age 6 to 7 years. Data on sleep duration were collected every 2 years, and covariates (eg, child sleep problems, maternal education) were assessed at baseline. Growth mixture modeling was used to identify distinct longitudinal patterns of sleep duration and significant covariates. Linear regression examined whether the distinct sleep patterns were significantly associated with health-related quality of life.\n\n\nRESULTS\nThe results identified 4 distinct sleep duration patterns: typical sleepers (40.6%), initially short sleepers (45.2%), poor sleepers (2.5%), and persistent short sleepers (11.6%). Factors such as child sleep problems, child irritability, maternal employment, household financial hardship, and household size distinguished between the trajectories. The results demonstrated that the trajectories had different implications for health-related quality of life. For instance, persistent short sleepers had poorer physical, emotional, and social health than typical sleepers.\n\n\nCONCLUSIONS\nThe results provide a novel insight into the nature of child sleep and the implications of differing sleep patterns for health-related quality of life. The findings could inform the development of effective interventions to promote healthful sleep patterns in children.",
"title": ""
},
{
"docid": "14ba4e49e1f773c8f7bfadf8f08a967e",
"text": "Mounting evidence suggests that acute and chronic stress, especially the stress-induced release of glucocorticoids, induces changes in glutamate neurotransmission in the prefrontal cortex and the hippocampus, thereby influencing some aspects of cognitive processing. In addition, dysfunction of glutamatergic neurotransmission is increasingly considered to be a core feature of stress-related mental illnesses. Recent studies have shed light on the mechanisms by which stress and glucocorticoids affect glutamate transmission, including effects on glutamate release, glutamate receptors and glutamate clearance and metabolism. This new understanding provides insights into normal brain functioning, as well as the pathophysiology and potential new treatments of stress-related neuropsychiatric disorders.",
"title": ""
},
{
"docid": "6e666fdd26ea00a6eebf7359bdf82329",
"text": "Kernel-level attacks or rootkits can compromise the security of an operating system by executing with the privilege of the kernel. Current approaches use virtualization to gain higher privilege over these attacks, and isolate security tools from the untrusted guest VM by moving them out and placing them in a separate trusted VM. Although out-of-VM isolation can help ensure security, the added overhead of world-switches between the guest VMs for each invocation of the monitor makes this approach unsuitable for many applications, especially fine-grained monitoring. In this paper, we present Secure In-VM Monitoring (SIM), a general-purpose framework that enables security monitoring applications to be placed back in the untrusted guest VM for efficiency without sacrificing the security guarantees provided by running them outside of the VM. We utilize contemporary hardware memory protection and hardware virtualization features available in recent processors to create a hypervisor protected address space where a monitor can execute and access data in native speeds and to which execution is transferred in a controlled manner that does not require hypervisor involvement. We have developed a prototype into KVM utilizing Intel VT hardware virtualization technology. We have also developed two representative applications for the Windows OS that monitor system calls and process creations. Our microbenchmarks show at least 10 times performance improvement in invocation of a monitor inside SIM over a monitor residing in another trusted VM. With a systematic security analysis of SIM against a number of possible threats, we show that SIM provides at least the same security guarantees as what can be achieved by out-of-VM monitors.",
"title": ""
},
{
"docid": "a2346bc58039ef6f5eb710804e87359d",
"text": "This work presents a deep object co-segmentation (DOCS) approach for segmenting common objects of the same class within a pair of images. This means that the method learns to ignore common, or uncommon, background stuff and focuses on common objects. If multiple object classes are presented in the image pair, they are jointly extracted as foreground. To address this task, we propose a CNN-based Siamese encoder-decoder architecture. The encoder extracts high-level semantic features of the foreground objects, a mutual correlation layer detects the common objects, and finally, the decoder generates the output foreground masks for each image. To train our model, we compile a large object co-segmentation dataset consisting of image pairs from the PASCAL dataset with common objects masks. We evaluate our approach on commonly used datasets for co-segmentation tasks and observe that our approach consistently outperforms competing methods, for both seen and unseen object classes.",
"title": ""
},
{
"docid": "69c8584255b16e6bc05fdfc6510d0dc4",
"text": "OBJECTIVE\nThis study assesses the psychometric properties of Ward's seven-subtest short form (SF) for WAIS-IV in a sample of adults with schizophrenia (SZ) and schizoaffective disorder.\n\n\nMETHOD\nSeventy patients diagnosed with schizophrenia or schizoaffective disorder were administered the full version of the WAIS-IV. Four different versions of the Ward's SF were then calculated. The subtests used were: Similarities, Digit Span, Arithmetic, Information, Coding, Picture Completion, and Block Design (BD version) or Matrix Reasoning (MR version). Prorated and regression-based formulae were assessed for each version.\n\n\nRESULTS\nThe actual and estimated factorial indexes reflected the typical pattern observed in schizophrenia. The four SFs correlated significantly with their full-version counterparts, but the Perceptual Reasoning Index (PRI) correlated below the acceptance threshold for all four versions. The regression-derived estimates showed larger differences compared to the full form. The four forms revealed comparable but generally low clinical category agreement rates for factor indexes. All SFs showed an acceptable reliability, but they were not correlated with clinical outcomes.\n\n\nCONCLUSIONS\nThe WAIS-IV SF offers a good estimate of WAIS-IV intelligence quotient, which is consistent with previous results. Although the overall scores are comparable between the four versions, the prorated forms provided a better estimation of almost all indexes. MR can be used as an alternative for BD without substantially changing the psychometric properties of the SF. However, we recommend a cautious use of these abbreviated forms when it is necessary to estimate the factor index scores, especially PRI, and Processing Speed Index.",
"title": ""
},
{
"docid": "13d1b0637c12d617702b4f80fd7874ef",
"text": "Linear-time algorithms for testing the planarity of a graph are well known for over 35 years. However, these algorithms are quite involved and recent publications still try to give simpler linear-time tests. We give a conceptually simple reduction from planarity testing to the problem of computing a certain construction of a 3-connected graph. This implies a linear-time planarity test. Our approach is radically different from all previous linear-time planarity tests; as key concept, we maintain a planar embedding that is 3-connected at each point in time. The algorithm computes a planar embedding if the input graph is planar and a Kuratowski-subdivision otherwise.",
"title": ""
},
{
"docid": "36b232e486ee4c9885a51a1aefc8f12b",
"text": "Graphics processing units (GPUs) are a powerful platform for building high-speed network traffic processing applications using low-cost hardware. Existing systems tap the massively parallel architecture of GPUs to speed up certain computationally intensive tasks, such as cryptographic operations and pattern matching. However, they still suffer from significant overheads due to criticalpath operations that are still being carried out on the CPU, and redundant inter-device data transfers. In this paper we present GASPP, a programmable network traffic processing framework tailored to modern graphics processors. GASPP integrates optimized GPUbased implementations of a broad range of operations commonly used in network traffic processing applications, including the first purely GPU-based implementation of network flow tracking and TCP stream reassembly. GASPP also employs novel mechanisms for tackling control flow irregularities across SIMT threads, and sharing memory context between the network interface and the GPU. Our evaluation shows that GASPP can achieve multi-gigabit traffic forwarding rates even for computationally intensive and complex network operations such as stateful traffic classification, intrusion detection, and packet encryption. Especially when consolidating multiple network applications on the same device, GASPP achieves up to 16.2× speedup compared to standalone GPU-based implementations of the same applications.",
"title": ""
},
{
"docid": "67265d70b2d704c0ab2898c933776dc2",
"text": "The intima-media thickness (IMT) of the common carotid artery (CCA) is widely used as an early indicator of cardiovascular disease (CVD). Typically, the IMT grows with age and this is used as a sign of increased risk of CVD. Beyond thickness, there is also clinical interest in identifying how the composition and texture of the intima-media complex (IMC) changed and how these textural changes grow into atherosclerotic plaques that can cause stroke. Clearly though texture analysis of ultrasound images can be greatly affected by speckle noise, our goal here is to develop effective despeckle noise methods that can recover image texture associated with increased rates of atherosclerosis disease. In this study, we perform a comparative evaluation of several despeckle filtering methods, on 100 ultrasound images of the CCA, based on the extracted multiscale Amplitude-Modulation Frequency-Modulation (AM-FM) texture features and visual image quality assessment by two clinical experts. Texture features were extracted from the automatically segmented IMC for three different age groups. The despeckle filters hybrid median and the homogeneous mask area filter showed the best performance by improving the class separation between the three age groups and also yielded significantly improved image quality.",
"title": ""
},
{
"docid": "a6f3be6fa5459a927fdbc455a4a081e2",
"text": "Crowdsourcing, simply referring to the act of outsourcing a task to the crowd, is one of the most important trends revolutionizing the internet and the mobile market at present. This paper is an attempt to understand the dynamic and innovative discipline of crowdsourcing by developing a critical success factor model for it. The critical success factor model is based on the case study analysis of the mobile phone based crowdsourcing initiatives in Africa and the available literature on outsourcing, crowdsourcing and technology adoption. The model is used to analyze and hint at some of the critical attributes of a successful crowdsourcing initiative focused on socio-economic development of societies. The broader aim of the paper is to provide academicians, social entrepreneurs, policy makers and other practitioners with a set of recommended actions and an overview of the important considerations to be kept in mind while implementing a crowdsourcing initiative.",
"title": ""
},
{
"docid": "3ef36b8675faf131da6cbc4d94f0067e",
"text": "The staggering amount of streaming time series coming from the real world calls for more efficient and effective online modeling solution. For time series modeling, most existing works make some unrealistic assumptions such as the input data is of fixed length or well aligned, which requires extra effort on segmentation or normalization of the raw streaming data. Although some literature claim their approaches to be invariant to data length and misalignment, they are too time-consuming to model a streaming time series in an online manner. We propose a novel and more practical online modeling and classification scheme, DDE-MGM, which does not make any assumptions on the time series while maintaining high efficiency and state-of-the-art performance. The derivative delay embedding (DDE) is developed to incrementally transform time series to the embedding space, where the intrinsic characteristics of data is preserved as recursive patterns regardless of the stream length and misalignment. Then, a non-parametric Markov geographic model (MGM) is proposed to both model and classify the pattern in an online manner. Experimental results demonstrate the effectiveness and superior classification accuracy of the proposed DDE-MGM in an online setting as compared to the state-of-the-art.",
"title": ""
},
{
"docid": "dbcfb877dae759f9ad1e451998d8df38",
"text": "Detection and tracking of humans in video streams is important for many applications. We present an approach to automatically detect and track multiple, possibly partially occluded humans in a walking or standing pose from a single camera, which may be stationary or moving. A human body is represented as an assembly of body parts. Part detectors are learned by boosting a number of weak classifiers which are based on edgelet features. Responses of part detectors are combined to form a joint likelihood model that includes an analysis of possible occlusions. The combined detection responses and the part detection responses provide the observations used for tracking. Trajectory initialization and termination are both automatic and rely on the confidences computed from the detection responses. An object is tracked by data association and meanshift methods. Our system can track humans with both inter-object and scene occlusions with static or non-static backgrounds. Evaluation results on a number of images and videos and comparisons with some previous methods are given.",
"title": ""
},
{
"docid": "9bf99d48bc201147a9a9ad5af547a002",
"text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.",
"title": ""
},
{
"docid": "e982954841e753aa0dd4f66fe2eb4f7a",
"text": "Background. Observational studies suggest that people who consume more fruits and vegetables containing beta carotene have somewhat lower risks of cancer and cardiovascular disease, and earlier basic research suggested plausible mechanisms. Because large randomized trials of long duration were necessary to test this hypothesis directly, we conducted a trial of beta carotene supplementation. Methods. In a randomized, double-blind, placebo-controlled trial of beta carotene (50 mg on alternate days), we enrolled 22,071 male physicians, 40 to 84 years of age, in the United States; 11 percent were current smokers and 39 percent were former smokers at the beginning of the study in 1982. By December 31, 1995, the scheduled end of the study, fewer than 1 percent had been lost to followup, and compliance was 78 percent in the group that received beta carotene. Results. Among 11,036 physicians randomly assigned to receive beta carotene and 11,035 assigned to receive placebo, there were virtually no early or late differences in the overall incidence of malignant neoplasms or cardiovascular disease, or in overall mortality. In the beta carotene group, 1273 men had any malignant neoplasm (except nonmelanoma skin cancer), as compared with 1293 in the placebo group (relative risk, 0.98; 95 percent confidence interval, 0.91 to 1.06). There were also no significant differences in the number of cases of lung cancer (82 in the beta carotene group vs. 88 in the placebo group); the number of deaths from cancer (386 vs. 380), deaths from any cause (979 vs. 968), or deaths from cardiovascular disease (338 vs. 313); the number of men with myocardial infarction (468 vs. 489); the number with stroke (367 vs. 382); or the number with any one of the previous three end points (967 vs. 972). Among current and former smokers, there were also no significant early or late differences in any of these end points. Conclusions. In this trial among healthy men, 12 years of supplementation with beta carotene produced neither benefit nor harm in terms of the incidence of malignant neoplasms, cardiovascular disease, or death from all causes. (N Engl J Med 1996;334:1145-9.) 1996, Massachusetts Medical Society. From the Divisions of Preventive Medicine (C.H.H., J.E.B., J.E.M., N.R.C., C.B., F.L., J.M.G., P.M.R.) and Cardiovascular Medicine (J.M.G., P.M.R.) and the Channing Laboratory (M.S., B.R., W.W.), Department of Medicine, Brigham and Women’s Hospital; the Department of Ambulatory Care and Prevention, Harvard Medical School (C.H.H., J.E.B., N.R.C.); and the Departments of Epidemiology (C.H.H., J.E.B., M.S., W.W.), Biostatistics (B.R.), and Nutrition (M.S., W.W.), Harvard School of Public Health — all in Boston; and the Imperial Cancer Research Fund Clinical Trial Service Unit, University of Oxford, Oxford, England (R.P.). Address reprint requests to Dr. Hennekens at 900 Commonwealth Ave. E., Boston, MA 02215. Supported by grants (CA-34944, CA-40360, HL-26490, and HL-34595) from the National Institutes of Health. O BSERVATIONAL epidemiologic studies suggest that people who consume higher dietary levels of fruits and vegetables containing beta carotene have a lower risk of certain types of cancer 1,2 and cardiovascular disease, 3 and basic research suggests plausible mechanisms. 
4-6 It is difficult to determine from observational studies, however, whether the apparent benefits are due to beta carotene itself, other nutrients in beta carotene– rich foods, other dietary habits, or other, nondietary lifestyle characteristics. 7 Long-term, large, randomized trials can provide a direct test of the efficacy of beta carotene in the prevention of cancer or cardiovascular disease. 8 For cancer, such trials should ideally last longer than the latency period (at least 5 to 10 years) of many types of cancer. A trial lasting 10 or more years could allow a sufficient period of latency and an adequate number of cancers for the detection of even a small reduction in overall risk due to supplementation with beta carotene. Two large, randomized, placebo-controlled trials in well-nourished populations (primarily cigarette smokers) have been reported. The Alpha-Tocopherol, Beta Carotene (ATBC) Cancer Prevention Study, a placebocontrolled trial, assigned 29,000 Finnish male smokers to receive beta carotene, vitamin E, both active agents, or neither, for an average of six years. 9 The BetaCarotene and Retinol Efficacy Trial (CARET) enrolled 18,000 men and women at high risk for lung cancer because of a history of cigarette smoking or occupational exposure to asbestos; this trial evaluated combined treatment with beta carotene and retinol for an average of less than four years. 10 Both studies found no benefits of such supplementation in terms of the incidence of Downloaded from www.nejm.org at UW MADISON on December 04, 2003. Copyright © 1996 Massachusetts Medical Society. All rights reserved. 1146 THE NEW ENGLAND JOURNAL OF MEDICINE May 2, 1996 cancer or cardiovascular disease; indeed, both found somewhat higher rates of lung cancer and cardiovascular disease among subjects given beta carotene. The estimated excess risks were small, and it remains unclear whether beta carotene was truly harmful. Moreover, since the duration of these studies was relatively short, they leave open the possibility that benefit, especially in terms of cancer, would become evident with longer treatment and follow-up. 11 In this report, we describe the findings of the beta carotene component of the Physicians’ Health Study, a randomized trial in which 22,071 U.S. male physicians were treated and followed for an average of 12 years.",
"title": ""
},
{
"docid": "1d44e13375e1b647fed4dbf661d80ec4",
"text": "Designing and implementing efficient, provably correct parallel neural network processing is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. However, the diversity and large-scale data size have posed a significant challenge to construct a flexible and high-performance implementation of deep learning neural networks. To improve the performance and maintain the scalability, we present CNNLab, a novel deep learning framework using GPU and FPGA-based accelerators. CNNLab provides a uniform programming model to users so that the hardware implementation and the scheduling are invisible to the programmers. At runtime, CNNLab leverages the trade-offs between GPU and FPGA before offloading the tasks to the accelerators. Experimental results on the state-of-the-art Nvidia K40 GPU and Altera DE5 FPGA board demonstrate that the CNNLab can provide a universal framework with efficient support for diverse applications without increasing the burden of the programmers. Moreover, we analyze the detailed quantitative performance, throughput, power, energy, and performance density for both approaches. Experimental results leverage the trade-offs between GPU and FPGA and provide useful practical experiences for the deep learning research community.",
"title": ""
},
{
"docid": "2710b8b13436aae826f89f9b48fc02bd",
"text": "The Winograd Schema Challenge is an alternative to the Turing Test that may provide a more meaningful measure of machine intelligence. It poses a set of coreference resolution problems that cannot be solved without human-like reasoning. In this paper, we take the view that the solution to such problems lies in establishing discourse coherence. Specifically, we examine two types of rhetorical relations that can be used to establish discourse coherence: positive and negative correlation. We introduce a framework for reasoning about correlation between sentences, and show how this framework can be used to justify solutions to some Winograd Schema problems.",
"title": ""
},
{
"docid": "db1c084ddbe345fe3c8e400e295830c8",
"text": "This article is a single-source introduction to the emerging concept of smart cities. It can be used for familiarizing researchers with the vast scope of research possible in this application domain. The smart city is primarily a concept, and there is still not a clear and consistent definition among practitioners and academia. As a simplistic explanation, a smart city is a place where traditional networks and services are made more flexible, efficient, and sustainable with the use of information, digital, and telecommunication technologies to improve the city's operations for the benefit of its inhabitants. Smart cities are greener, safer, faster, and friendlier. The different components of a smart city include smart infrastructure, smart transportation, smart energy, smart health care, and smart technology. These components are what make the cities smart and efficient. Information and communication technology (ICT) are enabling keys for transforming traditional cities into smart cities. Two closely related emerging technology frameworks, the Internet of Things (IoT) and big data (BD), make smart cities efficient and responsive. The technology has matured enough to allow smart cities to emerge. However, there is much needed in terms of physical infrastructure, a smart city, the digital technologies translate into better public services for inhabitants and better use of resources while reducing environmental impacts. One of the formal definitions of the smart city is the following: a city \"connecting the physical infrastructure, the information-technology infrastructure, the social infrastructure, and the business infrastructure to leverage the collective intelligence of the city\". Another formal and comprehensive definition is \"a smart sustainable city is an innovative city that uses information and communication technologies (ICTs) and other means to improve quality of life, efficiency of urban operations and services, and competitiveness, while ensuring that it meets the needs of present and future generations with respect to economic, social and environmental aspects\". Any combination of various smart components can make cities smart. A city need not have all the components to be labeled as smart. The number of smart components depends on the cost and available technology.",
"title": ""
},
{
"docid": "f7e5c139bc044683bd28840434212cf7",
"text": "Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system’s components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks’ links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures.",
"title": ""
}
] |
scidocsrr
|
a3634f2ed342bd475ab9de99c918259a
|
Neural modeling for time series: A statistical stepwise method for weight elimination
|
[
{
"docid": "f35f7aab4bf63527abbc3d7f4515b6d2",
"text": "The elements of the Hessian matrix consist of the second derivatives of the error measure with respect to the weights and thresholds in the network. They are needed in Bayesian estimation of network regularization parameters, for estimation of error bars on the network outputs, for network pruning algorithms, and for fast retraining of the network following a small change in the training data. In this paper we present an extended backpropagation algorithm that allows all elements of the Hessian matrix to be evaluated exactly for a feedforward network of arbitrary topology. Software implementation of the algorithm is straightforward.",
"title": ""
}
] |
[
{
"docid": "9e90012f70d1671edf1f2ef9f36fb08f",
"text": "In this paper, a new complementary gate driver for power metal-oxide semiconductor field-effect transistors and insulated gate bipolar transistors is presented based on the use of a piezoelectric transformer (PT). This type of transformer has a high integration capability. Its design is based on a multilayer structure working in the second thickness resonance mode. A new design method has been used based on an analytical Mason model in order to optimize the efficiency, the available power at the transformer secondary ends, and the total volume. This design method takes into account mechanical losses and heating of the piezoelectric material; it can be extended to predict the characteristics of the PT: gain, transmitted power, efficiency, and heating of piezoelectric materials according to load resistance. A prototype of a PT rated for an inverter-leg gate driver was fabricated and tested experimentally. All calculated characteristics have been confirmed by measurements. Satisfactory results have been obtained in driving a 10-A/300-V/10-kHz chopper. Moreover, a study has been carried out about the propagation of common mode currents between the top-switch and the bottom-switch of the inverter leg throughout the driver in order to avoid cross-talking failures.",
"title": ""
},
{
"docid": "1e0ddc413489d21c8580ec2ecc6ac69e",
"text": "We present several interrelated technical and empirical contributions to the problem of emotion-based music recommendation and show how they can be applied in a possible usage scenario. The contributions are (1) a new three-dimensional resonance-arousal-valence model for the representation of emotion expressed in music, together with methods for automatically classifying a piece of music in terms of this model, using robust regression methods applied to musical/acoustic features; (2) methods for predicting a listener’s emotional state on the assumption that the emotional state has been determined entirely by a sequence of pieces of music recently listened to, using conditional random fields and taking into account the decay of emotion intensity over time; and (3) a method for selecting a ranked list of pieces of music that match a particular emotional state, using a minimization iteration method. A series of experiments yield information about the validity of our operationalizations of these contributions. Throughout the article, we refer to an illustrative usage scenario in which all of these contributions can be exploited, where it is assumed that (1) a listener’s emotional state is being determined entirely by the music that he or she has been listening to and (2) the listener wants to hear additional music that matches his or her current emotional state. The contributions are intended to be useful in a variety of other scenarios as well.",
"title": ""
},
{
"docid": "013937d11822d5c8c4b24b7e1f15792f",
"text": "BACKGROUND\nClinical information is often stored as free text, e.g. in discharge summaries or pathology reports. These documents are semi-structured using section headers, numbered lists, items and classification strings. However, it is still challenging to retrieve relevant documents since keyword searches applied on complete unstructured documents result in many false positive retrieval results.\n\n\nOBJECTIVES\nWe are concentrating on the processing of pathology reports as an example for unstructured clinical documents. The objective is to transform reports semi-automatically into an information structure that enables an improved access and retrieval of relevant data. The data is expected to be stored in a standardized, structured way to make it accessible for queries that are applied to specific sections of a document (section-sensitive queries) and for information reuse.\n\n\nMETHODS\nOur processing pipeline comprises information modelling, section boundary detection and section-sensitive queries. For enabling a focused search in unstructured data, documents are automatically structured and transformed into a patient information model specified through openEHR archetypes. The resulting XML-based pathology electronic health records (PEHRs) are queried by XQuery and visualized by XSLT in HTML.\n\n\nRESULTS\nPathology reports (PRs) can be reliably structured into sections by a keyword-based approach. The information modelling using openEHR allows saving time in the modelling process since many archetypes can be reused. The resulting standardized, structured PEHRs allow accessing relevant data by retrieving data matching user queries.\n\n\nCONCLUSIONS\nMapping unstructured reports into a standardized information model is a practical solution for a better access to data. Archetype-based XML enables section-sensitive retrieval and visualisation by well-established XML techniques. Focussing the retrieval to particular sections has the potential of saving retrieval time and improving the accuracy of the retrieval.",
"title": ""
},
{
"docid": "d4fff9c75f3e8e699bbf5815b81e77b0",
"text": "We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations. First, using three well known DNNs (ResNet-152, VGG-19, GoogLeNet) we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker. Secondly, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet they display extremely poor generalisation abilities when tested on other distortion types. For example, training on salt-and-pepper noise does not imply robustness on uniform white noise and vice versa. Thus, changes in the noise distribution between training and testing constitutes a crucial challenge to deep learning vision systems that can be systematically addressed in a lifelong machine learning approach. Our new dataset consisting of 83K carefully measured human psychophysical trials provide a useful reference for lifelong robustness against image degradations set by the human visual system.",
"title": ""
},
{
"docid": "ab546c4ee80603a9b7edc3e69b07b7fd",
"text": "While there has been much successful work in developing rules to guide the design and implementation of interfaces for desktop machines and their applications, the design of mobile device interfaces is still relatively unexplored and unproven. This paper discusses the characteristics and limitations of current mobile device interfaces, especially compared to the desktop environment. Using existing interface guidelines as a starting point, a set of practical design guidelines for mobile device interfaces is proposed.",
"title": ""
},
{
"docid": "7530de11afdbb1e09c363644b0866bcb",
"text": "The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying cost and performance that depend on the degree of ordering desired. In particular, a protocol that enforces causal delivery orderings is introduced and shown to be a valuable alternative to conventional asynchronous communication protocols. The facility also ensures that the processes belonging to a fault-tolerant process group will observe consistent orderings of events affecting the group as a whole, including process failures, recoveries, migration, and dynamic changes to group properties like member rankings. A review of several uses for the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher level algorithms made possible by our approach.",
"title": ""
},
{
"docid": "7d017a5a6116a08cc9009a2f009af120",
"text": "Route Designer, version 1.0, is a new retrosynthetic analysis package that generates complete synthetic routes for target molecules starting from readily available starting materials. Rules describing retrosynthetic transformations are automatically generated from reaction databases, which ensure that the rules can be easily updated to reflect the latest reaction literature. These rules are used to carry out an exhaustive retrosynthetic analysis of the target molecule, in which heuristics are used to mitigate the combinatorial explosion. Proposed routes are prioritized by an empirical rating algorithm to present a diverse profile of the most promising solutions. The program runs on a server with a web-based user interface. An overview of the system is presented together with examples that illustrate Route Designer's utility.",
"title": ""
},
{
"docid": "b1b2a83d67456c0f0bf54092cbb06e65",
"text": "The transmission of voice communications as datagram packets over IP networks, commonly known as voice-over-IP (VoIP) telephony, is rapidly gaining wide acceptance. With private phone conversations being conducted on insecure public networks, security of VoIP communications is increasingly important. We present a structured security analysis of the VoIP protocol stack, which consists of signaling (SIP), session description (SDP), key establishment (SDES, MIKEY, and ZRTP) and secure media transport (SRTP) protocols. Using a combination of manual and tool-supported formal analysis, we uncover several design flaws and attacks, most of which are caused by subtle inconsistencies between the assumptions that protocols at different layers of the VoIP stack make about each other. The most serious attack is a replay attack on SDES, which causes SRTP to repeat the keystream used for media encryption, thus completely breaking transport-layer security. We also demonstrate a man-in-the-middle attack on ZRTP, which allows the attacker to convince the communicating parties that they have lost their shared secret. If they are using VoIP devices without displays and thus cannot execute the \"human authentication\" procedure, they are forced to communicate insecurely, or not communicate at all, i.e., this becomes a denial of service attack. Finally, we show that the key derivation process used in MIKEY cannot be used to prove security of the derived key in the standard cryptographic model for secure key exchange.",
"title": ""
},
{
"docid": "f8683ec3d85fc268d1eae4b83ef7e1ab",
"text": "Natural graphs with skewed distributions raise unique challenges to distributed graph computation and partitioning. Existing graph-parallel systems usually use a “one-size-fits-all” design that uniformly processes all vertices, which either suffer from notable load imbalance and high contention for high-degree vertices (e.g., Pregel and GraphLab) or incur high communication cost and memory consumption even for low-degree vertices (e.g., PowerGraph and GraphX). In this article, we argue that skewed distributions in natural graphs also necessitate differentiated processing on high-degree and low-degree vertices. We then introduce PowerLyra, a new distributed graph processing system that embraces the best of both worlds of existing graph-parallel systems. Specifically, PowerLyra uses centralized computation for low-degree vertices to avoid frequent communications and distributes the computation for high-degree vertices to balance workloads. PowerLyra further provides an efficient hybrid graph partitioning algorithm (i.e., hybrid-cut) that combines edge-cut (for low-degree vertices) and vertex-cut (for high-degree vertices) with heuristics. To improve cache locality of inter-node graph accesses, PowerLyra further provides a locality-conscious data layout optimization. PowerLyra is implemented based on the latest GraphLab and can seamlessly support various graph algorithms running in both synchronous and asynchronous execution modes. A detailed evaluation on three clusters using various graph-analytics and MLDM (Machine Learning and Data Mining) applications shows that PowerLyra outperforms PowerGraph by up to 5.53X (from 1.24X) and 3.26X (from 1.49X) for real-world and synthetic graphs, respectively, and is much faster than other systems like GraphX and Giraph, yet with much less memory consumption. A porting of hybrid-cut to GraphX further confirms the efficiency and generality of PowerLyra.",
"title": ""
},
{
"docid": "7bd7b0b85ae68f0ccd82d597667d8acb",
"text": "Trust evaluation plays an important role in securing wireless sensor networks (WSNs), which is one of the most popular network technologies for the Internet of Things (IoT). The efficiency of the trust evaluation process is largely governed by the trust derivation, as it dominates the overhead in the process, and performance of WSNs is particularly sensitive to overhead due to the limited bandwidth and power. This paper proposes an energy-aware trust derivation scheme using game theoretic approach, which manages overhead while maintaining adequate security of WSNs. A risk strategy model is first presented to stimulate WSN nodes' cooperation. Then, a game theoretic approach is applied to the trust derivation process to reduce the overhead of the process. We show with the help of simulations that our trust derivation scheme can achieve both intended security and high efficiency suitable for WSN-based IoT networks.",
"title": ""
},
{
"docid": "074a82111bafbf73acd07418befc1237",
"text": "A novel method based on MLE–OED is proposed for unsupervised image segmentation of multiple objects with fuzzy edges. It adjusts the parameters of a mixture of Gaussian distributions via minimizing a new loss function. The loss function consists of two terms: a local content fitting term, which optimizes the entropy distribution, and a global statistical fitting term, which maximizes the likelihood of the parameters for the given data. The proposed segmentation method is validated by experiments on both synthetic and real images. The experimental results show that the proposed method outperformed two popular methods. 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a049749849761dc4cd65d4442fd135f8",
"text": "Local classifiers are sometimes called lazy learners because they do not train a classifier until presented with a test sample. However, such methods are generally not completely lazy because the neighborhood size k (or other locality parameter) is usually chosen by cross validation on the training set, which can require significant preprocessing and risks overfitting. We propose a simple alternative to cross validation of the neighborhood size that requires no preprocessing: instead of committing to one neighborhood size, average the discriminants for multiple neighborhoods. We show that this forms an expected estimated posterior that minimizes the expected Bregman loss with respect to the uncertainty about the neighborhood choice. We analyze this approach for six standard and state-of-the-art local classifiers, including discriminative adaptive metric kNN (DANN), a local support vector machine (SVM-KNN), hyperplane distance nearest neighbor (HKNN), and a new local Bayesian quadratic discriminant analysis (local BDA). The empirical effectiveness of this technique versus cross validation is confirmed with experiments on seven benchmark data sets, showing that similar classification performance can be attained without any training.",
"title": ""
},
{
"docid": "77a0234ae555075aebd10b0d9926484f",
"text": "The antibacterial effect of visible light irradiation combined with photosensitizers has been reported. The objective of this was to test the effect of visible light irradiation without photosensitizers on the viability of oral microorganisms. Strains of Porphyromonas gingivalis, Fusobacterium nucleatum, Streptococcus mutans and Streptococcus faecalis in suspension or grown on agar were exposed to visible light at wavelengths of 400-500 nm. These wavelengths are used to photopolymerize composite resins widely used for dental restoration. Three photocuring light sources, quartz-tungsten-halogen lamp, light-emitting diode and plasma-arc, at power densities between 260 and 1300 mW/cm2 were used for up to 3 min. Bacterial samples were also exposed to a near-infrared diode laser (wavelength, 830 nm), using identical irradiation parameters for comparison. The results show that blue light sources exert a phototoxic effect on P. gingivalis and F. nucleatum. The minimal inhibitory dose for P. gingivalis and F. nucleatum was 16-62 J/cm2, a value significantly lower than that for S. mutans and S. faecalis (159-212 J/cm2). Near-infrared diode laser irradiation did not affect any of the bacteria tested. Our results suggest that visible light sources without exogenous photosensitizers have a phototoxic effect mainly on Gram-negative periodontal pathogens.",
"title": ""
},
{
"docid": "b6b3a99fd1d12583159a6f01f9c85617",
"text": "Recommending proper food is of paramount importance for person's sound health. It is also useful in revenue generation in restaurants by recommending varied choices of food or recommending restaurants depending on the type of food to customers. In this paper, we survey the existing food recommendation engines and compare the different recommendation algorithms - Content based filtering, collaborative filtering, hybrid techniques etc. Further, we would see the challenges and limitations of each technique.",
"title": ""
},
{
"docid": "ed8ee467e7f40d6ba35cc6f8329ca681",
"text": "This paper proposes an architecture for Software Defined Optical Transport Networks. The SDN Controller includes a network abstraction layer allowing the implementation of cognitive controls and policies for autonomic operation, based on global network view. Additionally, the controller implements a virtualized GMPLS control plane, offloading and simplifying the network elements, while unlocking the implementation of new services such as optical VPNs, optical network slicing, and keeping standard OIF interfaces, such as UNI and NNI. The concepts have been implemented and validated in a real testbed network formed by five DWDM nodes equipped with flexgrid WSS ROADMs.",
"title": ""
},
{
"docid": "f8a6b721f99e54db0c4c81b9713aae78",
"text": "In this paper, a new bridgeless single-ended primary inductance converter power-factor-correction rectifier is introduced. The proposed circuit provides lower conduction losses with reduced components simultaneously. In conventional PFC converters (continuous-conduction-mode boost converter), a voltage loop and a current loop are required for PFC. In the proposed converter, the control circuit is simplified, and no current loop is required while the converter operates in discontinuous conduction mode. Theoretical analysis and simulation results are provided to explain circuit operation. A prototype of the proposed converter is realized, and the results are presented. The measured efficiency shows 1% improvement in comparison to conventional SEPIC rectifier.",
"title": ""
},
{
"docid": "184319fbdee41de23718bb0831c53472",
"text": "Localization is a prominent application and research area in Wireless Sensor Networks. Various research studies have been carried out on localization techniques and algorithms in order to improve localization accuracy. Received signal strength indicator is a parameter, which has been widely used in localization algorithms in many research studies. There are several environmental and other factors that affect the localization accuracy and reliability. This study introduces a new technique to increase the localization accuracy by employing a dynamic distance reference anchor method. In order to investigate the performance improvement obtained with the proposed technique, simulation models have been developed, and results have been analyzed. The simulation results show that considerable improvement in localization accuracy can be achieved with the proposed model.",
"title": ""
},
{
"docid": "3379acb763f587851e2218fca8084117",
"text": "Qualitative research includes a variety of methodological approacheswith different disciplinary origins and tools. This article discusses three commonly used approaches: grounded theory, mixed methods, and action research. It provides background for those who will encounter these methodologies in their reading rather than instructions for carrying out such research. We describe the appropriate uses, key characteristics, and features of rigour of each approach.",
"title": ""
},
{
"docid": "1d535722b21b32caf10e4c2fd1b0e267",
"text": "Cyber criminals are increasingly using robocalling, voice phishing and caller ID spoofing to craft attacks that are being used to scam unsuspecting users who have traditionally trusted the telephone. It is necessary to better understand telephony threats to effectively combat them. Although there exist crowd sourced complaint datasets about telephony abuse, such complaints are often filed after a user receives multiple calls over a period of time, and sometimes they lack important information. We believe honeypot technologies can be used to augment telephony abuse intelligence and improve its quality. However, a telephony honeypot presents several new challenges that do not arise in other traditional honeypot settings. We present Phoneypot, a first large scale telephony honeypot, that allowed us to explore ways to address these challenges. By presenting a concrete implementation of Phoneypot using a cloud infrastructure and 39,696 phone numbers (phoneytokens), we provide evidence of the benefits of telephony honeypots. Phoneypot received 1.3 million calls from 250K unique sources over a period of seven weeks. We detected several debt collectors and telemarketers calling patterns and an instance of a telephony denial-of-service attack. This provides us with new insights into telephony abuse and attack patterns.",
"title": ""
},
{
"docid": "a3fa64c1f6553a46cfd9f88e9a802bb2",
"text": "With the increasing use of liquid crystal-based displays in everyday life, led both by the development of new portable electronic devices and the desire to minimize the use of printed paper, Nematic Liquid Crystals [4] (NLCs) are now hugely important industrial materials; and research into ways to engineer more efficient display technologies is crucial. Modern electronic display technology mostly relies on the ability of NLC materials to rotate the plane of polarized light (birefringence). The degree to which they can do this depends on the orientation of the molecules within the liquid crystal, and this in turn is affected by factors such as an applied electric field (the molecules, which are typically long and thin, line up in an applied field), or by boundary effects (a phenomenon known as surface anchoring). Most devices currently available use the former effect: an electric field is applied to control the molecular orientation of a thin film of nematic liquid crystal between crossed polarizers (which are also the electrodes), and this in turn controls the optical effect when light passes through the layer (figure 1). The main disadvantage of this set-up is that the electric field must be applied constantly in order for the display to maintain its configuration – if the field is removed, the molecules of the NLC relax into the unique, stable, field-free state (giving no contrast between pixels, and a monochrome display). This is expensive in terms of power consumption, leading to generally short battery lifetimes. On the other hand, if one could somehow exploit the fact that the bounding surfaces of a cell affect the molecular configuration – the anchoring effect, which can, to a large extent, be controlled by mechanical or chemical treatments [1]– then one might be able to engineer a bistable system, with two (or more) stable field-free states, giving two optically-distinct stable steady states of the device, without any electric field required to sustain them. Power is required only to change the state of the cell from one steady state to the other (and this issue of “switchability”, which can be hard to achieve, is really the challenging part of the design). Such technology is particularly appropriate for LCDs that change only infrequently, e.g. “electronic paper” applications such as e-books, e-newspapers, and so on. Certain technologies for bistable devices already exist; and most use the surface anchoring effect, combined with a clever choice of bounding surface geometry. The goal of this project will be to investigate simpler designs for liquid crystal devices that exhibit bistability. With planar surface topography, but different anchoring conditions at the two bounding surfaces, bistability is possible [2,3]; and a device of this kind should be easier to manufacture. Two different modeling approaches can be taken depending on what design aspect is to be optimized. A simple approach is to study only steady states of the system. Such states will be governed by (nonlinear) ODEs, and stability can be investigated as the electric field strength is varied. In a system with several steady states, loss of stability of one state at a critical field would mean a bifurcation of the solution, and a switch to a different state. Such an analysis could give information about how to achieve switching at low critical fields, for example; or at physically-realistic material parameter values; but would say nothing about how fast the switching might be. 
Speed of switching would need to be investigated by studying a simple PDE model for the system. We can explore both approaches here, and attempt to come up with some kind of “optimal” design – whatever that means!",
"title": ""
}
] |
scidocsrr
|
efe6890e7d308875c177be396c3753e2
|
Motivation to learn: an overview of contemporary theories
|
[
{
"docid": "f1c00253a57236ead67b013e7ce94a5e",
"text": "A meta-analysis of 128 studies examined the effects of extrinsic rewards on intrinsic motivation. As predicted, engagement-contingent, completion-contingent, and performance-contingent rewards significantly undermined free-choice intrinsic motivation (d = -0.40, -0.36, and -0.28, respectively), as did all rewards, all tangible rewards, and all expected rewards. Engagement-contingent and completion-contingent rewards also significantly undermined self-reported interest (d = -0.15, and -0.17), as did all tangible rewards and all expected rewards. Positive feedback enhanced both free-choice behavior (d = 0.33) and self-reported interest (d = 0.31). Tangible rewards tended to be more detrimental for children than college students, and verbal rewards tended to be less enhancing for children than college students. The authors review 4 previous meta-analyses of this literature and detail how this study's methods, analyses, and results differed from the previous ones.",
"title": ""
}
] |
[
{
"docid": "c296244ea4283a43623d3a3aabd4d672",
"text": "With growing interest in Chinese Language Processing, numerous NLP tools (e.g., word segmenters, part-of-speech taggers, and parsers) for Chinese have been developed all over the world. However, since no large-scale bracketed corpora are available to the public, these tools are trained on corpora with different segmentation criteria, part-of-speech tagsets and bracketing guidelines, and therefore, comparisons are difficult. As a first step towards addressing this issue, we have been preparing a large bracketed corpus since late 1998. The first two installments of the corpus, 250 thousand words of data, fully segmented, POS-tagged and syntactically bracketed, have been released to the public via LDC (www.ldc.upenn.edu). In this paper, we discuss several Chinese linguistic issues and their implications for our treebanking efforts and how we address these issues when developing our annotation guidelines. We also describe our engineering strategies to improve speed while ensuring annotation quality.",
"title": ""
},
{
"docid": "1af6549bfd46ab084143e91078a04151",
"text": "The advances in 3D data acquisition techniques, graphics hardware, and 3D data modeling and visualizing techniques have led to the proliferation of 3D models. This has made the searching for specific 3D models a vital issue. Techniques for effective and efficient content-based retrieval of 3D models have therefore become an essential research topic. In this paper, a novel feature, called elevation descriptor, is proposed for 3D model retrieval. The elevation descriptor is invariant to translation and scaling of 3D models and it is robust for rotation. First, six elevations are obtained to describe the altitude information of a 3D model from six different views. Each elevation is represented by a gray-level image which is decomposed into several concentric circles. The elevation descriptor is obtained by taking the difference between the altitude sums of two successive concentric circles. An efficient similarity matching method is used to find the best match for an input model. Experimental results show that the proposed method is superior to other descriptors, including spherical harmonics, the MPEG-7 3D shape spectrum descriptor, and D2. 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f9232e4a2d18a4cf6858b5739434273f",
"text": "Face spoofing detection (i.e. face anti-spoofing) is emerging as a new research area and has already attracted a good number of works during the past five years. This paper addresses for the first time the key problem of the variation in the input image quality and resolution in face anti-spoofing. In contrast to most existing works aiming at extracting multiscale descriptors from the original face images, we derive a new multiscale space to represent the face images before texture feature extraction. The new multiscale space representation is derived through multiscale filtering. Three multiscale filtering methods are considered including Gaussian scale space, Difference of Gaussian scale space and Multiscale Retinex. Extensive experiments on three challenging and publicly available face anti-spoofing databases demonstrate the effectiveness of our proposed multiscale space representation in improving the performance of face spoofing detection based on gray-scale and color texture descriptors.",
"title": ""
},
{
"docid": "e2cd9538192d717a9eaef6344cf0371e",
"text": "Device-to-device (D2D) communication commonly refers to a type of technology that enable devices to communicate directly with each other without communication infrastructures such as access points (APs) or base stations (BSs). Bluetooth and WiFi-Direct are the two most popular D2D techniques, both working in the unlicensed industrial, scientific and medical (ISM) bands. Cellular networks, on the other hand, do not support direct over-the-air communications between users and devices. However, with the emergence of context-aware applications and the accelerating growth of Machine-to-Machine (M2M) applications, D2D communication plays an increasingly important role. It facilitates the discovery of geographically close devices, and enables direct communications between these proximate devices, which improves communication capability and reduces communication delay and power consumption. To embrace the emerging market that requires D2D communications, mobile operators and vendors are accepting D2D as a part of the fourth generation (4G) Long Term Evolution (LTE)-Advanced standard in 3rd Generation Partnership Project (3GPP) Release 12.",
"title": ""
},
{
"docid": "f1b1dc51cf7a6d8cb3b644931724cad6",
"text": "OBJECTIVE\nTo evaluate the curing profile of bulk-fill resin-based composites (RBC) using micro-Raman spectroscopy (μRaman).\n\n\nMETHODS\nFour bulk-fill RBCs were compared to a conventional RBC. RBC blocks were light-cured using a polywave LED light-curing unit. The 24-h degree of conversion (DC) was mapped along a longitudinal cross-section using μRaman. Curing profiles were constructed and 'effective' (>90% of maximum DC) curing parameters were calculated. A statistical linear mixed effects model was constructed to analyze the relative effect of the different curing parameters.\n\n\nRESULTS\nCuring efficiency differed widely with the flowable bulk-fill RBCs presenting a significantly larger 'effective' curing area than the fibre-reinforced RBC, which on its turn revealed a significantly larger 'effective' curing area than the full-depth bulk-fill and conventional (control) RBC. A decrease in 'effective' curing depth within the light beam was found in the same order. Only the flowable bulk-fill RBCs were able to cure 'effectively' at a 4-mm depth for the whole specimen width (up to 4mm outside the light beam). All curing parameters were found to statistically influence the statistical model and thus the curing profile, except for the beam inhomogeneity (regarding the position of the 410-nm versus that of 470-nm LEDs) that did not significantly affect the model for all RBCs tested.\n\n\nCONCLUSIONS\nMost of the bulk-fill RBCs could be cured up to at least a 4-mm depth, thereby validating the respective manufacturer's recommendations.\n\n\nCLINICAL SIGNIFICANCE\nAccording to the curing profiles, the orientation and position of the light guide is less critical for the bulk-fill RBCs than for the conventional RBC.",
"title": ""
},
{
"docid": "cb9a54b8eeb6ca14bdbdf8ee3faa8bdb",
"text": "The problem of auto-focusing has been studied for long, but most techniques found in literature do not always work well for low-contrast images. In this paper, a robust focus measure based on the energy of the image is proposed. It performs equally well on ordinary and low-contrast images. In addition, it is computationally efficient.",
"title": ""
},
{
"docid": "fd62cb306e6e39e7ead79696591746b2",
"text": "Many data mining techniques have been proposed for mining useful patterns in text documents. However, how to effectively use and update discovered patterns is still an open research issue, especially in the domain of text mining. Since most existing text mining methods adopted term-based approaches, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern (or phrase)-based approaches should perform better than the term-based ones, but many experiments do not support this hypothesis. This paper presents an innovative and effective pattern discovery technique which includes the processes of pattern deploying and pattern evolving, to improve the effectiveness of using and updating discovered patterns for finding relevant and interesting information. Substantial experiments on RCV1 data collection and TREC topics demonstrate that the proposed solution achieves encouraging performance.",
"title": ""
},
{
"docid": "83da776714bf49c3bbb64976d20e26a2",
"text": "Orthogonal frequency division multiplexing (OFDM) has been widely adopted in modern wireless communication systems due to its robustness against the frequency selectivity of wireless channels. For coherent detection, channel estimation is essential for receiver design. Channel estimation is also necessary for diversity combining or interference suppression where there are multiple receive antennas. In this paper, we will present a survey on channel estimation for OFDM. This survey will first review traditional channel estimation approaches based on channel frequency response (CFR). Parametric model (PM)-based channel estimation, which is particularly suitable for sparse channels, will be also investigated in this survey. Following the success of turbo codes and low-density parity check (LDPC) codes, iterative processing has been widely adopted in the design of receivers, and iterative channel estimation has received a lot of attention since that time. Iterative channel estimation will be emphasized in this survey as the emerging iterative receiver improves system performance significantly. The combination of multiple-input multiple-output (MIMO) and OFDM has been widely accepted in modern communication systems, and channel estimation in MIMO-OFDM systems will also be addressed in this survey. Open issues and future work are discussed at the end of this paper.",
"title": ""
},
{
"docid": "5701585d5692b4b28da3132f4094fc9f",
"text": "We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs in high accuracy and the molecular information can enhance text-based DDI extraction by 2.39 percent points in the F-score on the DDIExtraction 2013 shared task data set.",
"title": ""
},
{
"docid": "956690691cffe76be26bcbb45d88071c",
"text": "We analyze different strategies aimed at optimizing routing policies in the Internet. We first show that for a simple deterministic algorithm the local properties of the network deeply influence the time needed for packet delivery between two arbitrarily chosen nodes. We next rely on a real Internet map at the autonomous system level and introduce a score function that allows us to examine different routing protocols and their efficiency in traffic handling and packet delivery. Our results suggest that actual mechanisms are not the most efficient and that they can be integrated in a more general, though not too complex, scheme.",
"title": ""
},
{
"docid": "0c20d1fb99a0c52535dd712125b47dd9",
"text": "In this paper, we explore the problem of license plate recognition in-the-wild (in the meaning of capturing data in unconstrained conditions, taken from arbitrary viewpoints and distances). We propose a method for automatic license plate recognition in-the-wild based on a geometric alignment of license plates as a preceding step for holistic license plate recognition. The alignment is done by a Convolutional Neural Network that estimates control points for rectifying the image and the following rectification step is formulated so that the whole alignment and recognition process can be assembled into one computational graph of a contemporary neural network framework, such as Tensorflow. The experiments show that the use of the aligner helps the recognition considerably: the error rate dropped from 9.6 % to 2.1 % on real-life images of license plates. The experiments also show that the solution is fast - it is capable of real-time processing even on an embedded and low-power platform (Jetson TX2). We collected and annotated a dataset of license plates called CamCar6k, containing 6,064 images with annotated corner points and ground truth texts. We make this dataset publicly available.",
"title": ""
},
{
"docid": "d0985c38f3441ca0d69af8afaf67c998",
"text": "In this paper we discuss the importance of ambiguity, uncertainty and limited information on individuals’ decision making in situations that have an impact on their privacy. We present experimental evidence from a survey study that demonstrates the impact of framing a marketing offer on participants’ willingness to accept when the consequences of the offer are uncertain and highly ambiguous.",
"title": ""
},
{
"docid": "2d43992a8eb6e97be676c04fc9ebd8dd",
"text": "Social interactions and interpersonal communication has undergone significant changes in recent years. Increasing awareness of privacy issues and events such as the Snowden disclosures have led to the rapid growth of a new generation of anonymous social networks and messaging applications. By removing traditional concepts of strong identities and social links, these services encourage communication between strangers, and allow users to express themselves without fear of bullying or retaliation.\n Despite millions of users and billions of monthly page views, there is little empirical analysis of how services like Whisper have changed the shape and content of social interactions. In this paper, we present results of the first large-scale empirical study of an anonymous social network, using a complete 3-month trace of the Whisper network covering 24 million whispers written by more than 1 million unique users. We seek to understand how anonymity and the lack of social links affect user behavior. We analyze Whisper from a number of perspectives, including the structure of user interactions in the absence of persistent social links, user engagement and network stickiness over time, and content moderation in a network with minimal user accountability. Finally, we identify and test an attack that exposes Whisper users to detailed location tracking. We have notified Whisper and they have taken steps to address the problem.",
"title": ""
},
{
"docid": "999a1fbc3830ca0453760595046edb6f",
"text": "This paper introduces BoostMap, a method that can significantly reduce retrieval time in image and video database systems that employ computationally expensive distance measures, metric or non-metric. Database and query objects are embedded into a Euclidean space, in which similarities can be rapidly measured using a weighted Manhattan distance. Embedding construction is formulated as a machine learning task, where AdaBoost is used to combine many simple, ID embeddings into a multidimensional embedding that preserves a significant amount of the proximity structure in the original space. Performance is evaluated in a hand pose estimation system, and a dynamic gesture recognition system, where the proposed method is used to retrieve approximate nearest neighbors under expensive image and video similarity measures: In both systems, in quantitative experiments, BoostMap significantly increases efficiency, with minimal losses in accuracy. Moreover, the experiments indicate that BoostMap compares favorably with existing embedding methods that have been employed in computer vision and database applications, i.e., FastMap and Bourgain embeddings.",
"title": ""
},
{
"docid": "09b35c40a65a0c2c0f58deb49555000d",
"text": "There are a wide range of forensic and analysis tools to examine digital evidence in existence today. Traditional tool design examines each source of digital evidence as a BLOB (binary large object) and it is up to the examiner to identify the relevant items from evidence. In the face of rapid technological advancements we are increasingly confronted with a diverse set of digital evidence and being able to identify a particular tool for conducting a specific analysis is an essential task. In this paper, we present a systematic study of contemporary forensic and analysis tools using a hypothesis based review to identify the different functionalities supported by these tools. We highlight the limitations of the forensic tools in regards to evidence corroboration and develop a case for building evidence correlation functionalities into these tools.",
"title": ""
},
{
"docid": "533b8bf523a1fb69d67939607814dc9c",
"text": "Docker is an open platform for developers and system administrators to build, ship, and run distributed applications using Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows. The main advantage is that, Docker can get code tested and deployed into production as fast as possible. Different applications can be run over Docker containers with language independency. In this paper the performance of these Docker containers are evaluated based on their system performance. That is based on system resource utilization. Different benchmarking tools are used for this. Performance based on file system is evaluated using Bonnie++. Other system resources such as CPU utilization, memory utilization etc. are evaluated based on the benchmarking code (using psutil) developed using python. Detail results obtained from all these tests are also included in this paper. The results include CPU utilization, memory utilization, CPU count, CPU times, Disk partition, network I/O counter etc.",
"title": ""
},
{
"docid": "d8634beb04329e72e462df98d31b2003",
"text": "Link prediction is a key technique in many applications in social networks, where potential links between entities need to be predicted. Conventional link prediction techniques deal with either homogeneous entities, e.g., people to people, item to item links, or non-reciprocal relationships, e.g., people to item links. However, a challenging problem in link prediction is that of heterogeneous and reciprocal link prediction, such as accurate prediction of matches on an online dating site, jobs or workers on employment websites, where the links are reciprocally determined by both entities that heterogeneously belong to disjoint groups. The nature and causes of interactions in these domains makes heterogeneous and reciprocal link prediction significantly different from the conventional version of the problem. In this work, we address these issues by proposing a novel learnable framework called ReHeLP, which learns heterogeneous and reciprocal knowledge from collaborative information and demonstrate its impact on link prediction. Evaluation on a large commercial online dating dataset shows the success of the proposed method and its promise for link prediction.",
"title": ""
},
{
"docid": "2f90f1d9ffb03e54fe5a29c17c7ebe2b",
"text": "Exact matching of single patterns in DNA and amino acid sequences is studied. We performed an extensive experimental comparison of algorithms presented in the literature. In addition, we introduce new variations of earlier algorithms. The results of the comparison show that the new algorithms are efficient in practice.",
"title": ""
},
{
"docid": "19e3338e136197d9d8ab57225f762161",
"text": "We study the problem of combining multiple bandit algorithms (that is, online learning algorithms with partial feedback) with the goal of creating a master algorithm that performs almost as well as the best base algorithm if it were to be run on its own. The main challenge is that when run with a master, base algorithms unavoidably receive much less feedback and it is thus critical that the master not starve a base algorithm that might perform uncompetitively initially but would eventually outperform others if given enough feedback. We address this difficulty by devising a version of Online Mirror Descent with a special mirror map together with a sophisticated learning rate scheme. We show that this approach manages to achieve a more delicate balance between exploiting and exploring base algorithms than previous works yielding superior regret bounds. Our results are applicable to many settings, such as multi-armed bandits, contextual bandits, and convex bandits. As examples, we present two main applications. The first is to create an algorithm that enjoys worst-case robustness while at the same time performing much better when the environment is relatively easy. The second is to create an algorithm that works simultaneously under different assumptions of the environment, such as different priors or different loss structures.",
"title": ""
}
] |
scidocsrr
|
9c21fbfb5a6ad86da348d91889920b62
|
Dog breed classification via landmarks
|
[
{
"docid": "2cba0f9b3f4b227dfe0b40e3bebd99e4",
"text": "In this paper we propose a discriminant learning framework for problems in which data consist of linear subspaces instead of vectors. By treating subspaces as basic elements, we can make learning algorithms adapt naturally to the problems with linear invariant structures. We propose a unifying view on the subspace-based learning method by formulating the problems on the Grassmann manifold, which is the set of fixed-dimensional linear subspaces of a Euclidean space. Previous methods on the problem typically adopt an inconsistent strategy: feature extraction is performed in the Euclidean space while non-Euclidean distances are used. In our approach, we treat each sub-space as a point in the Grassmann space, and perform feature extraction and classification in the same space. We show feasibility of the approach by using the Grassmann kernel functions such as the Projection kernel and the Binet-Cauchy kernel. Experiments with real image databases show that the proposed method performs well compared with state-of-the-art algorithms.",
"title": ""
}
] |
[
{
"docid": "546f96600d90107ed8262ad04274b012",
"text": "Large-scale labeled training datasets have enabled deep neural networks to excel on a wide range of benchmark vision tasks. However, in many applications it is prohibitively expensive or timeconsuming to obtain large quantities of labeled data. To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled target domain. Unfortunately, direct transfer across domains often performs poorly due to domain shift and dataset bias. Domain adaptation is the machine learning paradigm that aims to learn a model from a source domain that can perform well on a different (but related) target domain. In this paper, we summarize and compare the latest unsupervised domain adaptation methods in computer vision applications. We classify the non-deep approaches into sample re-weighting and intermediate subspace transformation categories, while the deep strategy includes discrepancy-based methods, adversarial generative models, adversarial discriminative models and reconstruction-based methods. We also discuss some potential directions.",
"title": ""
},
{
"docid": "b1e93a62fd70c83ac9d007791c9ba86b",
"text": "The growth of abnormal cells in the brain is called as Brain Tumor which affects the normal functionalities of the brain. It is one among the major leading cause for the human death. The causes for the brain tumors are unpredictable, because there are more than 120 types of brain tumors. Earlier diagnosis can save the life of a person. National Brain Tumor Society has given a survey that 31.7% in male and 34.4% in female are affected by brain tumor. American brain tumor association has given a survey that, more or less 70,000 people will be diagnosed in this year. Therefore, many researchers have proposed new system for the diagnosis of the brain tumor. We analyzed those research and finally summarized this paper with some conclusion about the efficiently and accuracy of those proposed systems. Keywords— ―Image mining, Preprocessing, Feature extraction, Image classification, Segmentation‖",
"title": ""
},
{
"docid": "debb6ac09ab841987733ef83e4620d52",
"text": "One of the traditional problems in the walking and climbing robot moving in the 3D environment is how to negotiate the boundary of two plain surfaces such as corners, which may be convex or concave. In this paper a practical gait planning algorithm in the transition region of the boundary is proposed in terms of a geometrical view. The trajectory of the body is derived from the geometrical analysis of the relationship between the robot and the environment. And the position of each foot is determined by using parameters associated with the hip and the ankle of the robot. In each case of concave or convex boundaries, the trajectory that the robot moves along is determined in advance and the foot positions of the robot associated with the trajectory are computed, accordingly. The usefulness of the proposed method is confirmed through simulations and demonstrations with a walking and climbing robot.",
"title": ""
},
{
"docid": "c69e756c586a3a7e5032ebd988c36ecf",
"text": "Frame semantics is a linguistic theory that has been instantiated for English in the FrameNet lexicon. We solve the problem of frame-semantic parsing using a two-stage statistical model that takes lexical targets (i.e., content words and phrases) in their sentential contexts and predicts frame-semantic structures. Given a target in context, the first stage disambiguates it to a semantic frame. This model uses latent variables and semi-supervised learning to improve frame disambiguation for targets unseen at training time. The second stage finds the target's locally expressed semantic arguments. At inference time, a fast exact dual decomposition algorithm collectively predicts all the arguments of a frame at once in order to respect declaratively stated linguistic constraints, resulting in qualitatively better structures than naïve local predictors. Both components are feature-based and discriminatively trained on a small set of annotated frame-semantic parses. On the SemEval 2007 benchmark data set, the approach, along with a heuristic identifier of frame-evoking targets, outperforms the prior state of the art by significant margins. Additionally, we present experiments on the much larger FrameNet 1.5 data set. We have released our frame-semantic parser as open-source software.",
"title": ""
},
{
"docid": "30bc7923529eec5ac7d62f91de804f8e",
"text": "In this paper, we consider the scene parsing problem and propose a novel MultiPath Feedback recurrent neural network (MPF-RNN) for parsing scene images. MPF-RNN can enhance the capability of RNNs in modeling long-range context information at multiple levels and better distinguish pixels that are easy to confuse. Different from feedforward CNNs and RNNs with only single feedback, MPFRNN propagates the contextual features learned at top layer through weighted recurrent connections to multiple bottom layers to help them learn better features with such “hindsight”. For better training MPF-RNN, we propose a new strategy that considers accumulative loss at multiple recurrent steps to improve performance of the MPF-RNN on parsing small objects. With these two novel components, MPF-RNN has achieved significant improvement over strong baselines (VGG16 and Res101) on five challenging scene parsing benchmarks, including traditional SiftFlow, Barcelona, CamVid, Stanford Background as well as the recently released large-scale ADE20K.",
"title": ""
},
{
"docid": "0ccc233ea8225de88882883d678793c8",
"text": "Sustaining of Moore's Law over the next decade will require not only continued scaling of the physical dimensions of transistors but also performance improvement and aggressive reduction in power consumption. Heterojunction Tunnel FET (TFET) has emerged as promising transistor candidate for supply voltage scaling down to sub-0.5V due to the possibility of sub-kT/q switching without compromising on-current (ION). Recently, n-type III-V HTFET with reasonable on-current and sub-kT/q switching at supply voltage of 0.5V have been experimentally demonstrated. However, steep switching performance of III-V HTFET till date has been limited to range of drain current (IDS) spanning over less than a decade. In this work, we will present progress on complimentary Tunnel FETs and analyze primary roadblocks in the path towards achieving steep switching performance in III-V HTFET.",
"title": ""
},
{
"docid": "fa7c81c8d3d6574f1f1c905ad136f0ee",
"text": "The goal of this track is to study and evaluate the performance of 3D object retrieval algorithms using RGB-D data. This is inspired from the practical need to pair an object acquired from a consumer-grade depth camera to CAD models available in public datasets on the Internet. To support the study, we propose ObjectNN, a new dataset with well segmented and annotated RGB-D objects from SceneNN [HPN∗16] and CAD models from ShapeNet [CFG∗15]. The evaluation results show that the RGB-D to CAD retrieval problem, while being challenging to solve due to partial and noisy 3D reconstruction, can be addressed to a good extent using deep learning techniques, particularly, convolutional neural networks trained by multi-view and 3D geometry. The best method in this track scores 82% in accuracy.",
"title": ""
},
{
"docid": "619f38266a35e76a77fb4141879e1e68",
"text": "In article various approaches to measurement of efficiency of innovations and the problems arising at their measurement are considered, the system of an indistinct conclusion for the solution of a problem of obtaining recommendations about measurement of efficiency of innovations is offered.",
"title": ""
},
{
"docid": "25a94dbd1c02a6183df945d4684a0f31",
"text": "The success of applying policy gradient reinforcement learning (RL) to difficult control tasks hinges crucially on the ability to determine a sensible initialization for the policy. Transfer learning methods tackle this problem by reusing knowledge gleaned from solving other related tasks. In the case of multiple task domains, these algorithms require an inter-task mapping to facilitate knowledge transfer across domains. However, there are currently no general methods to learn an inter-task mapping without requiring either background knowledge that is not typically present in RL settings, or an expensive analysis of an exponential number of inter-task mappings in the size of the state and action spaces. This paper introduces an autonomous framework that uses unsupervised manifold alignment to learn intertask mappings and effectively transfer samples between different task domains. Empirical results on diverse dynamical systems, including an application to quadrotor control, demonstrate its effectiveness for cross-domain transfer in the context of policy gradient RL. Introduction Policy gradient reinforcement learning (RL) algorithms have been applied with considerable success to solve highdimensional control problems, such as those arising in robotic control and coordination (Peters & Schaal 2008). These algorithms use gradient ascent to tune the parameters of a policy to maximize its expected performance. Unfortunately, this gradient ascent procedure is prone to becoming trapped in local maxima, and thus it has been widely recognized that initializing the policy in a sensible manner is crucial for achieving optimal performance. For instance, one typical strategy is to initialize the policy using human demonstrations (Peters & Schaal 2006), which may be infeasible when the task cannot be easily solved by a human. This paper explores a different approach: instead of initializing the policy at random (i.e., tabula rasa) or via human demonstrations, we instead use transfer learning (TL) to initialize the policy for a new target domain based on knowledge from one or more source tasks. In RL transfer, the source and target tasks may differ in their formulations (Taylor & Stone 2009). In particular, Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. when the source and target tasks have different state and/or action spaces, an inter-task mapping (Taylor et al. 2007a) that describes the relationship between the two tasks is typically needed. This paper introduces a framework for autonomously learning an inter-task mapping for cross-domain transfer in policy gradient RL. First, we learn an inter-state mapping (i.e., a mapping between states in two tasks) using unsupervised manifold alignment. Manifold alignment provides a powerful and general framework that can discover a shared latent representation to capture intrinsic relations between different tasks, irrespective of their dimensionality. The alignment also yields an implicit inter-action mapping that is generated by mapping tracking states from the source to the target. Given the mapping between task domains, source task trajectories are then used to initialize a policy in the target task, significantly improving the speed of subsequent learning over an uninformed initialization. This paper provides the following contributions. First, we introduce a novel unsupervised method for learning interstate mappings using manifold alignment. 
Second, we show that the discovered subspace can be used to initialize the target policy. Third, our empirical validation conducted on four dissimilar and dynamically chaotic task domains (e.g., controlling a three-link cart-pole and a quadrotor aerial vehicle) shows that our approach can a) automatically learn an inter-state mapping across MDPs from the same domain, b) automatically learn an inter-state mapping across MDPs from very different domains, and c) transfer informative initial policies to achieve higher initial performance and reduce the time needed for convergence to near-optimal behavior.",
"title": ""
},
{
"docid": "ddd1e06761a476dc02397a4381fbe8f8",
"text": "The potential for physical activity and fitness to improve cognitive function, learning and academic achievement in children has received attention by researchers and policy makers. This paper reports a systematic approach to identification, analysis and review of published studies up to early 2009. A threestep search method was adopted to identify studies that used measures of physical activity or fitness to assess either degree of association with or effect on a) academic achievement and b) cognitive performance. A total of 18 studies including one randomised control trial, six quasi-experimental and 11 correlational studies were included for data extraction. No studies meeting criteria that examined the links between physical activity and cognitive function were found. Weak positive associations were found between both physical activity and fitness and academic achievement and fitness and elements of cognitive function, but this was not supported by intervention studies. There is insufficient evidence to conclude that additional physical education time increases academic achievement; however there is no evidence that it is detrimental. The quality and depth of the evidence base is limited. Further research with rigour beyond correlational studies is essential.",
"title": ""
},
{
"docid": "6708846369ea2f352ac8784c75e4652d",
"text": "This work presents simple and fast structured Bayesian learning for matrix and tensor factorization models. An unblocked Gibbs sampler is proposed for factorization machines (FM) which are a general class of latent variable models subsuming matrix, tensor and many other factorization models. We empirically show on the large Netflix challenge dataset that Bayesian FM are fast, scalable and more accurate than state-of-the-art factorization models.",
"title": ""
},
{
"docid": "3cd680dce4f05bd3137b7091397ba6be",
"text": "We are developing a natural language interface for human robot interaction that implements reasoning about deep semantics in natural language. To realize the required deep analysis, we employ methods from cognitive linguistics, namely the modular and compositional framework of Embodied Construction Grammar (ECG) [18]. Using ECG, robots are able to solve fine-grained reference resolution problems and other issues related to deep semantics and compositionality of natural language. This also includes verbal interaction with humans to clarify commands and queries that are too ambiguous to be executed safely. We implement our NLU framework as a ROS package and present proof-of-concept scenarios with different robots, as well as a survey on the state of the art in knowledge-based language HRI.",
"title": ""
},
{
"docid": "62218093e4d3bf81b23512043fc7a013",
"text": "The Internet of things (IoT) refers to every object, which is connected over a network with the ability to transfer data. Users perceive this interaction and connection as useful in their daily life. However any improperly designed and configured technology will exposed to security threats. Therefore an ecosystem for IoT should be designed with security embedded in each layer of its ecosystem. This paper will discussed the security threats to IoT and then proposed an IoT Security Framework to mitigate it. Then IoT Security Framework will be used to develop a Secure IoT Sensor to Cloud Ecosystem.",
"title": ""
},
{
"docid": "6883add239f58223ef1941d5044d4aa8",
"text": "A novel jitter equalization circuit is presented that addresses crosstalk-induced jitter in high-speed serial links. A simple model of electromagnetic coupling demonstrates the generation of crosstalk-induced jitter. The analysis highlights unique aspects of crosstalk-induced jitter that differ from far-end crosstalk. The model is used to predict the crosstalk-induced jitter in 2-PAM and 4-PAM, which is compared to measurement. Furthermore, the model suggests an equalizer that compensates for the data-induced electromagnetic coupling between adjacent links and is suitable for pre- or post-emphasis schemes. The circuits are implemented using 130-nm MOSFETs and operate at 5-10 Gb/s. The results demonstrate reduced deterministic jitter and lower bit-error rate (BER). At 10 Gb/s, the crosstalk-induced jitter equalizer opens the eye at 10/sup -12/ BER from 17 to 45 ps and lowers the rms jitter from 8.7 to 6.3 ps.",
"title": ""
},
{
"docid": "93cd4b68feab408177cdbfa2e4f2b217",
"text": "Wireless Capsule Endoscopy (WCE) is considered as a promising technology for non-invasive gastrointestinal disease examination. This paper studies the classification problem of the digestive organs for wireless capsule endoscopy (WCE) images aiming at saving the review time of doctors. Our previous study has proved the Convolutional Neural Networks (CNN)-based WCE classification system is able to achieve 95% classification accuracy in average, but it is difficult to further improve the classification accuracy owing to the variations of individuals and the complex digestive tract circumstance. Research shows that there are two possible approaches to improve classification accuracy: to extract more discriminative image features and to employ a more powerful classifier. In this paper, we propose to design a WCE classification system by a hybrid CNN with Extreme Learning Machine (ELM). In our approach, we construct the CNN as a data-driven feature extractor and the cascaded ELM as a strong classifier instead of the conventional used full-connection classifier in deep CNN classification system. Moreover, to improve the convergence and classification capability of ELM under supervision manner, a new initialization is employed. Our developed WCE image classification system is named as HCNN-NELM. With about 1 million real WCE images (25 examinations), intensive experiments are conducted to evaluate its performance. Results illustrate its superior performance compared to traditional classification methods and conventional CNN-based method, where about 97.25% classification accuracy can be achieved in average.",
"title": ""
},
{
"docid": "3faeedfe2473dc837ab0db9eb4aefc4b",
"text": "The spacing effect—that is, the benefit of spacing learning events apart rather than massing them together—has been demonstrated in hundreds of experiments, but is not well known to educators or learners. I investigated the spacing effect in the realistic context of flashcard use. Learners often divide flashcards into relatively small stacks, but compared to a large stack, small stacks decrease the spacing between study trials. In three experiments, participants used a web-based study programme to learn GRE-type word pairs. Studying one large stack of flashcards (i.e. spacing) was more effective than studying four smaller stacks of flashcards separately (i.e. massing). Spacing was also more effective than cramming—that is, massing study on the last day before the test. Across experiments, spacing was more effective than massing for 90% of the participants, yet after the first study session, 72% of the participants believed that massing had been more effective than spacing. Copyright # 2009 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "e9e2887e7aae5315a8661c9d7456aa2e",
"text": "It has been shown that learning distributed word representations is highly useful for Twitter sentiment classification. Most existing models rely on a single distributed representation for each word. This is problematic for sentiment classification because words are often polysemous and each word can contain different sentiment polarities under different topics. We address this issue by learning topic-enriched multi-prototype word embeddings (TMWE). In particular, we develop two neural networks which 1) learn word embeddings that better capture tweet context by incorporating topic information, and 2) learn topic-enriched multiple prototype embeddings for each word. Experiments on Twitter sentiment benchmark datasets in SemEval 2013 show that TMWE outperforms the top system with hand-crafted features, and the current best neural network model.",
"title": ""
},
{
"docid": "20bcf837048350386e091eb33ad130cc",
"text": "We describe a design pattern for writing programs that traverse data structures built from rich mutually-recursive data types. Such programs often have a great deal of \"boilerplate\" code that simply walks the structure, hiding a small amount of \"real\" code that constitutes the reason for the traversal.Our technique allows most of this boilerplate to be written once and for all, or even generated mechanically, leaving the programmer free to concentrate on the important part of the algorithm. These generic programs are much more adaptive when faced with data structure evolution because they contain many fewer lines of type-specific code.Our approach is simple to understand, reasonably efficient, and it handles all the data types found in conventional functional programming languages. It makes essential use of rank-2 polymorphism, an extension found in some implementations of Haskell. Further it relies on a simple type-safe cast operator.",
"title": ""
},
{
"docid": "cf0f63001493acd328a80c80430a5b44",
"text": "Random forest classification is a well known machine learning technique that generates classifiers in the form of an ensemble (\"forest\") of decision trees. The classification of an input sample is determined by the majority classification by the ensemble. Traditional random forest classifiers can be highly effective, but classification using a random forest is memory bound and not typically suitable for acceleration using FPGAs or GP-GPUs due to the need to traverse large, possibly irregular decision trees. Recent work at Lawrence Livermore National Laboratory has developed several variants of random forest classifiers, including the Compact Random Forest (CRF), that can generate decision trees more suitable for acceleration than traditional decision trees. Our paper compares and contrasts the effectiveness of FPGAs, GP-GPUs, and multi-core CPUs for accelerating classification using models generated by compact random forest machine learning classifiers. Taking advantage of training algorithms that can produce compact random forests composed of many, small trees rather than fewer, deep trees, we are able to regularize the forest such that the classification of any sample takes a deterministic amount of time. This optimization then allows us to execute the classifier in a pipelined or single-instruction multiple thread (SIMT) fashion. We show that FPGAs provide the highest performance solution, but require a multi-chip / multi-board system to execute even modest sized forests. GP-GPUs offer a more flexible solution with reasonably high performance that scales with forest size. Finally, multi-threading via Open MP on a shared memory system was the simplest solution and provided near linear performance that scaled with core count, but was still significantly slower than the GP-GPU and FPGA.",
"title": ""
},
{
"docid": "9b9a2a9695f90a6a9a0d800192dd76f6",
"text": "Due to high competition in today's business and the need for satisfactory communication with customers, companies understand the inevitable necessity to focus not only on preventing customer churn but also on predicting their needs and providing the best services for them. The purpose of this article is to predict future services needed by wireless users, with data mining techniques. For this purpose, the database of customers of an ISP in Shiraz, which logs the customer usage of wireless internet connections, is utilized. Since internet service has three main factors to define (Time, Speed, Traffics) we predict each separately. First, future service demand is predicted by implementing a simple Recency, Frequency, Monetary (RFM) as a basic model. Other factors such as duration from first use, slope of customer's usage curve, percentage of activation, Bytes In, Bytes Out and the number of retries to establish a connection and also customer lifetime value are considered and added to RFM model. Then each one of R, F, M criteria is alternately omitted and the result is evaluated. Assessment is done through analysis node which determines the accuracy of evaluated data among partitioned data. The result shows that CART and C5.0 are the best algorithms to predict future services in this case. As for the features, depending upon output of each features, duration and transfer Bytes are the most important after RFM. An ISP may use the model discussed in this article to meet customers' demands and ensure their loyalty and satisfaction.",
"title": ""
}
] |
scidocsrr
|
442388b36313ea8a321314c045649655
|
A machine learning approach for fingerprint based gender identification
|
[
{
"docid": "d529b4f1992f438bb3ce4373090f8540",
"text": "One conventional tool for interpolating surfaces over scattered data, the thin-plate spline, has an elegant algebra expressing the dependence of the physical bending energy of a thin metal plate on point constraints. For interpolation of a surface over a fixed set of nodes in the plane, the bending energy is a quadratic form in the heights assigned to the surface. The spline is the superposition of eigenvectors of the bending energy matrix, of successively larger physical scales, over a tilted flat plane having no bending energy at all. When these splines are paired, one representing the x-coordinate of another form and the other the y-coordinate, they aid greatly in the modeling of biological shape change as deformation. In this context, the pair becomes an interpolation map from RZ to R' relating two sets of landmark points. The spline maps decompose, in the same way as the spline surfaces, into a linear part (an affine transformation) together with the superposition of principal warps, which are geometrically independent, affine-free deformations of progressively smaller geometrical scales. The warps decompose an empirical deformation into orthogonal features more or less as a conventional orthogonal functional analysis decomposes the single scene. This paper demonstrates the decomposition of deformations by principal warps, extends the method to deal with curving edges between landmarks, relates this formalism to other applications of splines current in computer vision, and indicates how they might aid in the extraction of features for analysis, comparison, and diagnosis of biological and medical images.",
"title": ""
},
{
"docid": "fabc65effd31f3bb394406abfa215b3e",
"text": "Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find detailed description of the theory (including proofs).",
"title": ""
}
] |
[
{
"docid": "0160d50bdd4bef68e4bc23e362283b0f",
"text": "Segmentation is a fundamental step in image description or classi1cation. In recent years, several computational models have been used to implement segmentation methods but without establishing a single analytic solution. However, the intrinsic properties of neural networks make them an interesting approach, despite some measure of ine5ciency. This paper presents a clustering approach for image segmentation based on a modi1ed fuzzy approach for image segmentation (ART) model. The goal of the proposed approach is to 1nd a simple model able to instance a prototype for each cluster avoiding complex post-processing phases. Results and comparisons with other similar models presented in the literature (like self-organizing maps and original fuzzy ART) are also discussed. Qualitative and quantitative evaluations con1rm the validity of the approach proposed. ? 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7d646fdb10b1ef9d332b6bb80bc40920",
"text": "Online financial textual information contains a large amount of investor sentiment, i.e. subjective assessment and discussion with respect to financial instruments. An effective solution to automate the sentiment analysis of such large amounts of online financial texts would be extremely beneficial. This paper presents a natural language processing (NLP) based pre-processing approach both for noise removal from raw online financial texts and for organizing such texts into an enhanced format that is more usable for feature extraction. The proposed approach integrates six NLP processing steps, including a developed syntactic and semantic combined negation handling algorithm, to reduce noise in the online informal text. Three-class sentiment classification is also introduced in each system implementation. Experimental results show that the proposed pre-processing approach outperforms other pre-processing methods. The combined negation handling algorithm is also evaluated against three standard negation handling approaches.",
"title": ""
},
{
"docid": "a23aa9d2a0a100e805e3c25399f4f361",
"text": "Cases of poisoning by oleander (Nerium oleander) were observed in several species, except in goats. This study aimed to evaluate the pathological effects of oleander in goats. The experimental design used three goats per group: the control group, which did not receive oleander and the experimental group, which received leaves of oleander (50 mg/kg/day) for six consecutive days. On the seventh day, goats received 110 mg/kg of oleander leaves four times at one-hourly interval. A last dose of 330 mg/kg of oleander leaves was given subsequently. After the last dose was administered, clinical signs such as apathy, colic, vocalizations, hyperpnea, polyuria, and moderate rumen distention were observed. Electrocardiogram revealed second-degree atrioventricular block. Death occurred on an average at 92 min after the last dosing. Microscopic evaluation revealed renal necrosis at convoluted and collector tubules and slight myocardial degeneration was observed by unequal staining of cardiomyocytes. Data suggest that goats appear to respond to oleander poisoning in a manner similar to other species.",
"title": ""
},
{
"docid": "9a283f62dad38887bc6779c3ea61979d",
"text": "Recent evidence supports that alterations in hepatocyte-derived exosomes (HDE) may play a role in the pathogenesis of drug-induced liver injury (DILI). HDE-based biomarkers also hold promise to improve the sensitivity of existing in vitro assays for predicting DILI liability. Primary human hepatocytes (PHH) provide a physiologically relevant in vitro model to explore the mechanistic and biomarker potential of HDE in DILI. However, optimal methods to study exosomes in this culture system have not been defined. Here we use HepG2 and HepaRG cells along with PHH to optimize methods for in vitro HDE research. We compared the quantity and purity of HDE enriched from HepG2 cell culture medium by 3 widely used methods: ultracentrifugation (UC), OptiPrep density gradient ultracentrifugation (ODG), and ExoQuick (EQ)-a commercially available exosome precipitation reagent. Although EQ resulted in the highest number of particles, UC resulted in more exosomes as indicated by the relative abundance of exosomal CD63 to cellular prohibitin-1 as well as the comparative absence of contaminating extravesicular material. To determine culture conditions that best supported exosome release, we also assessed the effect of Matrigel matrix overlay at concentrations ranging from 0 to 0.25 mg/ml in HepaRG cells and compared exosome release from fresh and cryopreserved PHH from same donor. Sandwich culture did not impair exosome release, and freshly prepared PHH yielded a higher number of HDE overall. Taken together, our data support the use of UC-based enrichment from fresh preparations of sandwich-cultured PHH for future studies of HDE in DILI.",
"title": ""
},
{
"docid": "3e3dc575858c21806edbe6149475f5e0",
"text": "This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs (G), dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command’s hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as “Put the tire pallet on the truck.” The model is trained using a corpus of commands collected using crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model. We evaluate the robot’s performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system’s performance. We demonstrate that our system can successfully follow many natural language commands from the corpus.",
"title": ""
},
{
"docid": "a7f9da2652de7f00a30ebbe59098ae80",
"text": "Wireless Sensor Networks (WSNs) are becoming increasingly popular since they can gather information from different locations without wires. This advantage is exploited in applications such as robotic systems, telecare, domotic or smart cities, among others. To gain independence from the electricity grid, WSNs devices are equipped with batteries, therefore their operational time is determined by the time that the batteries can power on the device. As a consequence, engineers must consider low energy consumption as a critical objective to design WSNs. Several approaches can be taken to make efficient use of energy in WSNs, for instance low-duty-cycling sensor networks (LDC-WSN). Based on the LDC-WSNs, we present LOKA, a LOw power Konsumption Algorithm to minimize WSNs energy consumption using different power modes in a sensor mote. The contribution of the work is a novel algorithm called LOKA that implements two duty-cycling mechanisms using the end-device of the ZigBee protocol (of the Application Support Sublayer) and an external microcontroller (Cortex M0+) in order to minimize the energy consumption of a delay tolerant networking. Experiments show that using LOKA, the energy required by the sensor device is reduced to half with respect to the same sensor device without using LOKA.",
"title": ""
},
{
"docid": "a31358ffda425f8e3f7fd15646d04417",
"text": "We elaborate the design and simulation of a planar antenna that is suitable for CubeSat picosatellites. The antenna operates at 436 MHz and its main features are miniature size and the built-in capability to produce circular polarization. The miniaturization procedure is given in detail, and the electrical performance of this small antenna is documented. Two main miniaturization techniques have been applied, i.e. dielectric loading and distortion of the current path. We have added an extra degree of freedom to the latter. The radiator is integrated with the chassis of the picosatellite and, at the same time, operates at the lower end of the UHF spectrum. In terms of electrical size, the structure presented herein is one of the smallest antennas that have been proposed for small satellites. Despite its small electrical size, the antenna maintains acceptable efficiency and gain performance in the band of interest.",
"title": ""
},
{
"docid": "b6707e5553e23e1a7786230217e81d6a",
"text": "Service robots have to robustly follow and interact with humans. In this paper, we propose a very fast multi-people tracking algorithm designed to be applied on mobile service robots. Our approach exploits RGB-D data and can run in real-time at very high frame rate on a standard laptop without the need for a GPU implementation. It also features a novel depthbased sub-clustering method which allows to detect people within groups or even standing near walls. Moreover, for limiting drifts and track ID switches, an online learning appearance classifier is proposed featuring a three-term joint likelihood. We compared the performances of our system with a number of state-of-the-art tracking algorithms on two public datasets acquired with three static Kinects and a moving stereo pair, respectively. In order to validate the 3D accuracy of our system, we created a new dataset in which RGB-D data are acquired by a moving robot. We made publicly available this dataset which is not only annotated by hand, but the ground-truth position of people and robot are acquired with a motion capture system in order to evaluate tracking accuracy and precision in 3D coordinates. Results of experiments on these datasets are presented, showing that, even without the need for a GPU, our approach achieves state-of-the-art accuracy and superior speed. Matteo Munaro Via Gradenigo 6A, 35131 Padova, Italy Tel.: +39-049-8277831 E-mail: [email protected] Emanuele Menegatti Via Gradenigo 6A, 35131 Padova, Italy Tel.: +39-049-8277651 E-mail: [email protected] (a) (b) Fig. 1 Example of our system output: (a) a 3D bounding box is drawn for every tracked person on the RGB image, (b) the corresponding 3D point cloud is reported, together with the estimated people trajectories.",
"title": ""
},
{
"docid": "fcb526dfd8f1d24b622995d4c0ff3e6c",
"text": "Scene flow is defined as the motion field in 3D space, and can be computed from a single view when using an RGBD sensor. We propose a new scene flow approach that exploits the local and piecewise rigidity of real world scenes. By modeling the motion as a field of twists, our method encourages piecewise smooth solutions of rigid body motions. We give a general formulation to solve for local and global rigid motions by jointly using intensity and depth data. In order to deal efficiently with a moving camera, we model the motion as a rigid component plus a non-rigid residual and propose an alternating solver. The evaluation demonstrates that the proposed method achieves the best results in the most commonly used scene flow benchmark. Through additional experiments we indicate the general applicability of our approach in a variety of different scenarios.",
"title": ""
},
{
"docid": "e6a92df6b717a55f86425b0164e9aa3a",
"text": "The COmpound Semiconductor Materials On Silicon (COSMOS) program of the U.S. Defense Advanced Research Projects Agency (DARPA) focuses on developing transistor-scale heterogeneous integration processes to intimately combine advanced compound semiconductor (CS) devices with high-density silicon circuits. The technical approaches being explored in this program include high-density micro assembly, monolithic epitaxial growth, and epitaxial layer printing processes. In Phase I of the program, performers successfully demonstrated world-record differential amplifiers through heterogeneous integration of InP HBTs with commercially fabricated CMOS circuits. In the current Phase II, complex wideband, large dynamic range, high-speed digital-to-analog convertors (DACs) are under development based on the above heterogeneous integration approaches. These DAC designs will utilize InP HBTs in the critical high-speed, high-voltage swing circuit blocks and will employ sophisticated in situ digital correction techniques enabled by CMOS transistors. This paper will also discuss the Phase III program plan as well as future directions for heterogeneous integration technology that will benefit mixed signal circuit applications.",
"title": ""
},
{
"docid": "d93609853422aed1c326d35ab820095d",
"text": "We present a method for inferring a 4D light field of a hidden scene from 2D shadows cast by a known occluder on a diffuse wall. We do this by determining how light naturally reflected off surfaces in the hidden scene interacts with the occluder. By modeling the light transport as a linear system, and incorporating prior knowledge about light field structures, we can invert the system to recover the hidden scene. We demonstrate results of our inference method across simulations and experiments with different types of occluders. For instance, using the shadow cast by a real house plant, we are able to recover low resolution light fields with different levels of texture and parallax complexity. We provide two experimental results: a human subject and two planar elements at different depths.",
"title": ""
},
{
"docid": "007e4cdac93554fc961f9aafd64aeab0",
"text": "Control system cyber security defense mechanisms may employ deception in human system interactions to make it more difficult for attackers to plan and execute successful attacks. These deceptive defense mechanisms are organized and initially explored according to a specific deception taxonomy and the seven abstract dimensions of security previously proposed as a framework for the cyber security of control systems.",
"title": ""
},
{
"docid": "db9ab8624cdf9b6fdfc91a5d72b76694",
"text": "In this paper, a low profile LLC resonant converter with two transformers using a planar core is proposed for a slim switching mode power supply (SMPS). Design procedures, magnetic modeling and voltage gain characteristics on the proposed planar transformer and converter are described in detail. LLC resonant converter including two transformers using a planar core is connected in series at primary and in parallel by the center-tap winding at secondary. Based on the theoretical analysis and simulation results of the voltage gain characteristics, a 300W LLC resonant converter is designed and tested.",
"title": ""
},
{
"docid": "e0d4ab67dc39967b7daa4dc438ef79f5",
"text": "Biclustering techniques have been widely used to identify homogeneous subgroups within large data matrices, such as subsets of genes similarly expressed across subsets of patients. Mining a max-sum sub-matrix is a related but distinct problem for which one looks for a (non-necessarily contiguous) rectangular sub-matrix with a maximal sum of its entries. Le Van et al. [6] already illustrated its applicability to gene expression analysis and addressed it with a constraint programming (CP) approach combined with large neighborhood search (CP-LNS). In this work, we exhibit some key properties of this NP-hard problem and define a bounding function such that larger problems can be solved in reasonable time. Two different algorithms are proposed in order to exploit the highlighted characteristics of the problem: a CP approach with a global constraint (CPGC) and mixed integer linear programming (MILP). Practical experiments conducted both on synthetic and real gene expression data exhibit the characteristics of these approaches and their relative benefits over the original CP-LNS method. Overall, the CPGC approach tends to be the fastest to produce a good solution. Yet, the MILP formulation is arguably the easiest to formulate and can also be competitive.",
"title": ""
},
{
"docid": "9ccbd750bd39e0451d98a7371c2b0914",
"text": "The aim of this study was to assess the effect of inspiratory muscle training (IMT) on resistance to fatigue of the diaphragm (D), parasternal (PS), sternocleidomastoid (SCM) and scalene (SC) muscles in healthy humans during exhaustive exercise. Daily inspiratory muscle strength training was performed for 3 weeks in 10 male subjects (at a pressure threshold load of 60% of maximal inspiratory pressure (MIP) for the first week, 70% of MIP for the second week, and 80% of MIP for the third week). Before and after training, subjects performed an incremental cycle test to exhaustion. Maximal inspiratory pressure and EMG-analysis served as indices of inspiratory muscle fatigue assessment. The before-to-after exercise decreases in MIP and centroid frequency (fc) of the EMG (D, PS, SCM, and SC) power spectrum (P<0.05) were observed in all subjects before the IMT intervention. Such changes were absent after the IMT. The study found that in healthy subjects, IMT results in significant increase in MIP (+18%), a delay of inspiratory muscle fatigue during exhaustive exercise, and a significant improvement in maximal work performance. We conclude that the IMT elicits resistance to the development of inspiratory muscles fatigue during high-intensity exercise.",
"title": ""
},
{
"docid": "8bae8e7937f4c9a492a7030c62d7d9f4",
"text": "Although there is considerable interest in the advance bookings model as a forecasting method in the hotel industry, there has been little research analyzing the use of an advance booking curve in forecasting hotel reservations. The mainstream of advance booking models reviewed in the literature uses only the bookings-on-hand data on a certain day and ignores the previous booking data. This empirical study analyzes the entire booking data set for one year provided by the Hotel ICON in Hong Kong, and identifies the trends and patterns in the data. The analysis demonstrates the use of an advance booking curve in forecasting hotel reservations at property level.",
"title": ""
},
{
"docid": "78f03adf9c114a8a720c9518b1cbf59e",
"text": "A crucial capability of autonomous road vehicles is the ability to cope with the unknown future behavior of surrounding traffic participants. This requires using non-deterministic models for prediction. While stochastic models are useful for long-term planning, we use set-valued non-determinism capturing all possible behaviors in order to verify the safety of planned maneuvers. To reduce the set of solutions, our earlier work considers traffic rules; however, it neglects mutual influences between traffic participants. This work presents the first solution for establishing interaction within set-based prediction of traffic participants. Instead of explicitly modeling dependencies between vehicles, we trim reachable occupancy regions to consider interaction, which is computationally much more efficient. The usefulness of our approach is demonstrated by experiments from the CommonRoad benchmark repository.",
"title": ""
},
{
"docid": "5aaba72970d1d055768e981f7e8e3684",
"text": "A hash table is a fundamental data structure in computer science that can offer rapid storage and retrieval of data. A leading implementation for string keys is the cacheconscious array hash table. Although fast with strings, there is currently no information in the research literatur e on its performance with integer keys. More importantly, we do not know how efficient an integer-based array hash table is compared to other hash tables that are designed for integers, such as bucketized cuckoo hashing. In this paper, we explain how to efficiently implement an array hash table for integers. We then demonstrate, through careful experimental evaluations, which hash table, whether it be a bucketized cuckoo hash table, an array hash table, or alternative hash table schemes such as linear probing, offers the best performance—with respect to time and space— for maintaining a large dictionary of integers in-memory, on a current cache-oriented processor.",
"title": ""
},
{
"docid": "88b562679f217affe489b6914bbc342b",
"text": "The measurement of functional gene abundance in diverse microbial communities often employs quantitative PCR (qPCR) with highly degenerate oligonucleotide primers. While degenerate PCR primers have been demonstrated to cause template-specific bias in PCR applications, the effect of such bias on qPCR has been less well explored. We used a set of diverse, full-length nifH gene standards to test the performance of several universal nifH primer sets in qPCR. We found significant template-specific bias in all but the PolF/PolR primer set. Template-specific bias caused more than 1000-fold mis-estimation of nifH gene copy number for three of the primer sets and one primer set resulted in more than 10,000-fold mis-estimation. Furthermore, such template-specific bias will cause qPCR estimates to vary in response to beta-diversity, thereby causing mis-estimation of changes in gene copy number. A reduction in bias was achieved by increasing the primer concentration. We conclude that degenerate primers should be evaluated across a range of templates, annealing temperatures, and primer concentrations to evaluate the potential for template-specific bias prior to their use in qPCR.",
"title": ""
},
{
"docid": "2f200468d1c8ddef1e1805cfb047b702",
"text": "BACKGROUND\nIn a previous trial of antiretroviral therapy (ART) involving pregnant women with human immunodeficiency virus (HIV) infection, those randomly assigned to receive tenofovir, emtricitabine, and ritonavir-boosted lopinavir (TDF-FTC-LPV/r) had infants at greater risk for very premature birth and death within 14 days after delivery than those assigned to receive zidovudine, lamivudine, and ritonavir-boosted lopinavir (ZDV-3TC-LPV/r).\n\n\nMETHODS\nUsing data from two U.S.-based cohort studies, we compared the risk of adverse birth outcomes among infants with in utero exposure to ZDV-3TC-LPV/r, TDF-FTC-LPV/r, or TDF-FTC with ritonavir-boosted atazanavir (ATV/r). We evaluated the risk of preterm birth (<37 completed weeks of gestation), very preterm birth (<34 completed weeks), low birth weight (<2500 g), and very low birth weight (<1500 g). Risk ratios with 95% confidence intervals were estimated with the use of modified Poisson models to adjust for confounding.\n\n\nRESULTS\nThere were 4646 birth outcomes. Few infants or fetuses were exposed to TDF-FTC-LPV/r (128 [2.8%]) as the initial ART regimen during gestation, in contrast with TDF-FTC-ATV/r (539 [11.6%]) and ZDV-3TC-LPV/r (954 [20.5%]). As compared with women receiving ZDV-3TC-LPV/r, women receiving TDF-FTC-LPV/r had a similar risk of preterm birth (risk ratio, 0.90; 95% confidence interval [CI], 0.60 to 1.33) and low birth weight (risk ratio, 1.13; 95% CI, 0.78 to 1.64). As compared to women receiving TDF-FTC-ATV/r, women receiving TDF-FTC-LPV/r had a similar or slightly higher risk of preterm birth (risk ratio, 1.14; 95% CI, 0.75 to 1.72) and low birth weight (risk ratio, 1.45; 95% CI, 0.96 to 2.17). There were no significant differences between regimens in the risk of very preterm birth or very low birth weight.\n\n\nCONCLUSIONS\nThe risk of adverse birth outcomes was not higher with TDF-FTC-LPV/r than with ZDV-3TC-LPV/r or TDF-FTC-ATV/r among HIV-infected women and their infants in the United States, although power was limited for some comparisons. (Funded by the National Institutes of Health and others.).",
"title": ""
}
] |
scidocsrr
|
1433d3c4edd9577aac0cee153d8cd1a7
|
Monkey Says, Monkey Does: Security and Privacy on Voice Assistants
|
[
{
"docid": "1a44645ee469e4bbaa978216d01f7e0d",
"text": "The growing popularity of mobile search and the advancement in voice recognition technologies have opened the door for web search users to speak their queries, rather than type them. While this kind of voice search is still in its infancy, it is gradually becoming more widespread. In this paper, we examine the logs of a commercial search engine's mobile interface, and compare the spoken queries to the typed-in queries. We place special emphasis on the semantic and syntactic characteristics of the two types of queries. %Our analysis suggests that voice queries focus more on audio-visual content and question answering, and less on social networking and adult domains. We also conduct an empirical evaluation showing that the language of voice queries is closer to natural language than typed queries. Our analysis reveals further differences between voice and text search, which have implications for the design of future voice-enabled search tools.",
"title": ""
},
{
"docid": "bb94ac9ac0c1e1f1155fc56b13bc103e",
"text": "In contrast to the Android application layer, Android’s application framework’s internals and their influence on the platform security and user privacy are still largely a black box for us. In this paper, we establish a static runtime model of the application framework in order to study its internals and provide the first high-level classification of the framework’s protected resources. We thereby uncover design patterns that differ highly from the runtime model at the application layer. We demonstrate the benefits of our insights for security-focused analysis of the framework by re-visiting the important use-case of mapping Android permissions to framework/SDK API methods. We, in particular, present a novel mapping based on our findings that significantly improves on prior results in this area that were established based on insufficient knowledge about the framework’s internals. Moreover, we introduce the concept of permission locality to show that although framework services follow the principle of separation of duty, the accompanying permission checks to guard sensitive operations violate it.",
"title": ""
},
{
"docid": "6f768934f02c0e559801a7b98d0fbbd7",
"text": "Voice-activated intelligent assistants, such as Siri, Google Now, and Cortana, are prevalent on mobile devices. However, it is challenging to evaluate them due to the varied and evolving number of tasks supported, e.g., voice command, web search, and chat. Since each task may have its own procedure and a unique form of correct answers, it is expensive to evaluate each task individually. This paper is the first attempt to solve this challenge. We develop consistent and automatic approaches that can evaluate different tasks in voice-activated intelligent assistants. We use implicit feedback from users to predict whether users are satisfied with the intelligent assistant as well as its components, i.e., speech recognition and intent classification. Using this approach, we can potentially evaluate and compare different tasks within and across intelligent assistants ac-cording to the predicted user satisfaction rates. Our approach is characterized by an automatic scheme of categorizing user-system interaction into task-independent dialog actions, e.g., the user is commanding, selecting, or confirming an action. We use the action sequence in a session to predict user satisfaction and the quality of speech recognition and intent classification. We also incorporate other features to further improve our approach, including features derived from previous work on web search satisfaction prediction, and those utilizing acoustic characteristics of voice requests. We evaluate our approach using data collected from a user study. Results show our approach can accurately identify satisfactory and unsatisfactory sessions.",
"title": ""
}
] |
[
{
"docid": "035341c7862f31eb6a4de0126ae569b5",
"text": "Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them either lack neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain.",
"title": ""
},
{
"docid": "aef051dd5cc521359f2f40b01ae80e35",
"text": "Despite the promising progress made in recent years, person re-identification (re-ID) remains a challenging task due to the complex variations in human appearances from different camera views. For this challenging problem, a large variety of algorithms have been developed in the fully supervised setting, requiring access to a large amount of labeled training data. However, the main bottleneck for fully supervised re-ID is the limited availability of labeled training samples. To address this problem, we propose a self-trained subspace learning paradigm for person re-ID that effectively utilizes both labeled and unlabeled data to learn a discriminative subspace where person images across disjoint camera views can be easily matched. The proposed approach first constructs pseudo-pairwise relationships among unlabeled persons using the k-nearest neighbors algorithm. Then, with the pseudo-pairwise relationships, the unlabeled samples can be easily combined with the labeled samples to learn a discriminative projection by solving an eigenvalue problem. In addition, we refine the pseudo-pairwise relationships iteratively, which further improves learning performance. A multi-kernel embedding strategy is also incorporated into the proposed approach to cope with the non-linearity in a person’s appearance and explore the complementation of multiple kernels. In this way, the performance of person re-ID can be greatly enhanced when training data are insufficient. Experimental results on six widely used datasets demonstrate the effectiveness of our approach, and its performance can be comparable to the reported results of most state-of-the-art fully supervised methods while using much fewer labeled data.",
"title": ""
},
{
"docid": "13aef8ba225dd15dd013e155c319310e",
"text": "ness and Approximations Thursday, June 9, 2011 Abstractness and Approximations • This rather absurd attack goes as followsness and Approximations • This rather absurd attack goes as follows Thursday, June 9, 2011 Abstractness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully.ness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. Thursday, June 9, 2011 Abstractness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully.ness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Thursday, June 9, 2011 Abstractness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully.ness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Thursday, June 9, 2011 • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Abstractness and Approximations Thursday, June 9, 2011 • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Abstractness and Approximations • Going by the same argument: Thursday, June 9, 2011 • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Abstractness and Approximations • Going by the same argument: • Since Turing computers can’t be realized fully, Turing computation is now another “myth.” Thursday, June 9, 2011 • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Abstractness and Approximations • Going by the same argument: • Since Turing computers can’t be realized fully, Turing computation is now another “myth.” • The problem is that Davis fails to recognize that a lot of th hypercomputational models are abstract models that no one hopes to build in the near future. Thursday, June 9, 2011 Necessity of Noncomputable Reals Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. 
Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories • Infinite time Turing Machines Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories • Infinite time Turing Machines • Zeus Machines Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories • Infinite time Turing Machines • Zeus Machines • Kieu-type Quantum Computation Thursday, June 9, 2011 Science-based Arguments: A Meta Analysis of Davis and friends Thursday, June 9, 2011 Science-based Arguments: A Meta Analysis of Davis and friends The Main Case Science of Sciences Part 1: Chain Store Paradox Part 2: Turing-level Actors Part 3:MDL Computational Learning Theory CLT-based Model of Science",
"title": ""
},
{
"docid": "d579ed125d3a051069b69f634fffe488",
"text": "Culture can be thought of as a set of everyday practices and a core theme-individualism, collectivism, or honor-as well as the capacity to understand each of these themes. In one's own culture, it is easy to fail to see that a cultural lens exists and instead to think that there is no lens at all, only reality. Hence, studying culture requires stepping out of it. There are two main methods to do so: The first involves using between-group comparisons to highlight differences and the second involves using experimental methods to test the consequences of disruption to implicit cultural frames. These methods highlight three ways that culture organizes experience: (a) It shields reflexive processing by making everyday life feel predictable, (b) it scaffolds which cognitive procedure (connect, separate, or order) will be the default in ambiguous situations, and (c) it facilitates situation-specific accessibility of alternate cognitive procedures. Modern societal social-demographic trends reduce predictability and increase collectivism and honor-based go-to cognitive procedures.",
"title": ""
},
{
"docid": "8f289714182c490b726b8edbbb672cfd",
"text": "Design and implementation of a 15kV sub-nanosecond pulse generator using Trigatron type spark gap as a switch. Straightforward and compact trigger generator using pulse shaping network which produces a trigger pulse of sub-nanosecond rise time. A pulse power system requires delivering a high voltage, high coulomb in short rise time. This is achieved by using pulse shaping network comprises of parallel combinations of capacitors and inductor. Spark gap switches are used to switch the energy from capacitive source to inductive load. The pulse hence generated can be used for synchronization of two or more spark gap. Because of the fast rise time and the high output voltage, the reliability of the synchronization is increased. The analytical calculations, simulation, have been carried out to select the circuit parameters. Simulation results using MATLAB/SIMULINK have been implemented in the experimental setup and sub-nanoseconds output waveforms have been obtained.",
"title": ""
},
{
"docid": "d0963f3416065ae6cc84b0205c06b4c6",
"text": "In this paper, we apply the concept of k-core on the graphof-words representation of text for single-document keyword extraction, retaining only the nodes from the main core as representative terms. This approach takes better into account proximity between keywords and variability in the number of extracted keywords through the selection of more cohesive subsets of nodes than with existing graphbased approaches solely based on centrality. Experiments on two standard datasets show statistically significant improvements in F1-score and AUC of precision/recall curve compared to baseline results, in particular when weighting the edges of the graph with the number of co-occurrences. To the best of our knowledge, this is the first application of graph degeneracy to natural language processing and information retrieval.",
"title": ""
},
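The k-core keyword-extraction idea in the record above can be illustrated with a minimal sketch: build a graph-of-words over a token window and keep only the main core. The window size, whitespace tokenization, unweighted edges (the passage also weights edges by co-occurrence counts), and the toy sentence are assumptions for illustration, not details taken from the passage.

```python
# Minimal sketch (assumptions: window of 4 tokens, unweighted edges, networkx for the k-core).
import networkx as nx

def main_core_keywords(tokens, window=4):
    g = nx.Graph()
    g.add_nodes_from(tokens)
    # Connect terms that co-occur within a sliding window over the token sequence.
    for i, term in enumerate(tokens):
        for other in tokens[i + 1:i + window]:
            if other != term:
                g.add_edge(term, other)
    # Keep only the main core: the k-core for the largest k that is still non-empty.
    k_max = max(nx.core_number(g).values())
    return sorted(nx.k_core(g, k=k_max).nodes())

tokens = "keyword extraction builds a graph of words and keyword extraction keeps the densest words".split()
print(main_core_keywords(tokens))
```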
{
"docid": "baa0bf8fe429c4fe8bfb7ebf78a1ed94",
"text": "The weakly supervised object localization (WSOL) is to locate the objects in an image while only image-level labels are available during the training procedure. In this work, the Selective Feature Category Mapping (SFCM) method is proposed, which introduces the Feature Category Mapping (FCM) and the widely-used selective search method to solve the WSOL task. Our FCM replaces layers after the specific layer in the state-of-the-art CNNs with a set of kernels and learns the weighted pooling for previous feature maps. It is trained with only image-level labels and then map the feature maps to their corresponding categories in the test phase. Together with selective search method, the location of each object is finally obtained. Extensive experimental evaluation on ILSVRC2012 and PASCAL VOC2007 benchmarks shows that SFCM is simple but very effective, and it is able to achieve outstanding classification performance and outperform the state-of-the-art methods in the WSOL task.",
"title": ""
},
{
"docid": "2c704a11e212b90520e92adf85696674",
"text": "The authors in this study examined the function and public reception of critical tweeting in online campaigns of four nationalist populist politicians during major national election campaigns. Using a mix of qualitative coding and case study inductive methods, we analyzed the tweets of Narendra Modi, Nigel Farage, Donald Trump, and Geert Wilders before the 2014 Indian general elections, the 2016 UK Brexit referendum, the 2016 US presidential election, and the 2017 Dutch general election, respectively. Our data show that Trump is a consistent outlier in terms of using critical language on Twitter when compared to Wilders, Farage, and Modi, but that all four leaders show significant investment in various forms of antagonistic messaging including personal insults, sarcasm, and labeling, and that these are rewarded online by higher retweet rates. Building on the work of Murray Edelman and his notion of a political spectacle, we examined Twitter as a performative space for critical rhetoric within the frame of nationalist politics. We found that cultural and political differences among the four settings also impact how each politician employs these tactics. Our work proposes that studies of social media spaces need to bring normative questions into traditional notions of collaboration. As we show here, political actors may benefit from in-group coalescence around antagonistic messaging, which while serving as a call to arms for online collaboration for those ideologically aligned, may on a societal level lead to greater polarization.",
"title": ""
},
{
"docid": "f054e4464f2ef68ad9127afe00108b9a",
"text": "RFID systems often use near-field magnetic coupling to implement communication channels. The advertised operational range of these channels is less than 10 cm and therefore several implemented systems assume that the communication channel is location limited and therefore relatively secure. Nevertheless, there have been repeated questions raised about the vulnerability of these near-field systems against eavesdropping and skimming attacks. In this paper we revisit the topic of RFID eavesdropping and skimming attacks, surveying previous work and explaining why the feasibility of practical attacks is still a relevant and novel research topic. We present a brief overview of the radio characteristics for popular HF RFID standards and present some practical results for eavesdropping experiments against tokens adhering to the ISO 14443 and ISO 15693 standards. We also discuss how an attacker could construct a low-cost eavesdropping device using easy to obtain parts and reference designs. Finally, we present results for skimming experiments against ISO 14443 tokens.",
"title": ""
},
{
"docid": "0c9112aeebf0b43b577c2cfd5f121d39",
"text": "The fundamental objective behind the present study is to demonstrate the visible effect of ComputerAssisted Instruction upon Iranian EFL learners' reading performance, and to see if it has any impact upon this skill in the Iranian EFLeducational settings. To this end, a sample of 50 male and female EFL learners was drawn from an English language institute in Iran. After participating in a proficiency pretest, these participants were assigned into two experimental and control groups, 25 and 25, respectively. An independent sample t-test was administered to find out if there were salient differences between the findings of the two selected groups in their reading test. The key research question was to see providing learners with computer-assisted instruction during the processes of learning and instruction for learners would have an affirmative influence upon the improvement and development of their reading skill. The results pinpointed computer-assisted instruction users' performance was meaningfully higher than that of nonusers (DF 1⁄4 48, P < 05). The consequences revealed that computer-assisted language learning and computer technology application have resulted in a greater promotion of students' reading improvement. In other words, computer-assisted instruction users outperformed the nonusers. The research, therefore, highlights the conclusion that EFL learners' use of computer-assisted instruction has the potential to promote more effective reading ability. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ce05cd217e2e8eafb2503309b6c8e220",
"text": "We present a type-theoretical framework for formal semantics, which leverages two existing well established tools: Grammatical Framework (GF) and Coq. The framework is the semantic equivalent of GF’s resource grammars: every syntactic construction is mapped to a (compositional) semantics. Our tool thus extends the standard GF grammar with a formal semantic backbone. We evaluated our framework on 5 sections of the FraCaS test-suite (174 examples) and significantly improved on the state of the art by 14 percentage points, obtaining 83% accuracy. Our semantics is free software and available at this url: http://github.com/GU-CLASP/FraCoq",
"title": ""
},
{
"docid": "c5b39921ebebb8bbb20fdef471e9d275",
"text": "One popular justification for punishment is the just deserts rationale: A person deserves punishment proportionate to the moral wrong committed. A competing justification is the deterrence rationale: Punishing an offender reduces the frequency and likelihood of future offenses. The authors examined the motivation underlying laypeople's use of punishment for prototypical wrongs. Study 1 (N = 336) revealed high sensitivity to factors uniquely associated with the just deserts perspective (e.g., offense seriousness, moral trespass) and insensitivity to factors associated with deterrence (e.g., likelihood of detection, offense frequency). Study 2 (N = 329) confirmed the proposed model through structural equation modeling (SEM). Study 3 (N = 351) revealed that despite strongly stated preferences for deterrence theory, individual sentencing decisions seemed driven exclusively by just deserts concerns.",
"title": ""
},
{
"docid": "8f2cfb5cb55b093f67c1811aba8b87e2",
"text": "“You make what you measure” is a familiar mantra at datadriven companies. Accordingly, companies must be careful to choose North Star metrics that create a better product. Metrics fall into two general categories: direct count metrics such as total revenue and monthly active users, and nuanced quality metrics regarding value or other aspects of the user experience. Count metrics, when used exclusively as the North Star, might inform product decisions that harm user experience. Therefore, quality metrics play an important role in product development. We present a five-step framework for developing quality metrics using a combination of machine learning and product intuition. Machine learning ensures that the metric accurately captures user experience. Product intuition makes the metric interpretable and actionable. Through a case study of the Endorsements product at LinkedIn, we illustrate the danger of optimizing exclusively for count metrics, and showcase the successful application of our framework toward developing a quality metric. We show how the new quality metric has driven significant improvements toward creating a valuable, user-first product.",
"title": ""
},
{
"docid": "2a39202664217724ea0a49ceb83a82af",
"text": "This article proposes a competitive divide-and-conquer algorithm for solving large-scale black-box optimization problems for which there are thousands of decision variables and the algebraic models of the problems are unavailable. We focus on problems that are partially additively separable, since this type of problem can be further decomposed into a number of smaller independent subproblems. The proposed algorithm addresses two important issues in solving large-scale black-box optimization: (1) the identification of the independent subproblems without explicitly knowing the formula of the objective function and (2) the optimization of the identified black-box subproblems. First, a Global Differential Grouping (GDG) method is proposed to identify the independent subproblems. Then, a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is adopted to solve the subproblems resulting from its rotation invariance property. GDG and CMA-ES work together under the cooperative co-evolution framework. The resultant algorithm, named CC-GDG-CMAES, is then evaluated on the CEC’2010 large-scale global optimization (LSGO) benchmark functions, which have a thousand decision variables and black-box objective functions. The experimental results show that, on most test functions evaluated in this study, GDG manages to obtain an ideal partition of the index set of the decision variables, and CC-GDG-CMAES outperforms the state-of-the-art results. Moreover, the competitive performance of the well-known CMA-ES is extended from low-dimensional to high-dimensional black-box problems.",
"title": ""
},
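The decomposition step described in the record above rests on detecting which decision variables interact without access to the objective's formula. Below is a small sketch of the pairwise interaction test that differential grouping (and GDG as its global variant) builds on; the probe points, the threshold, and the toy function are assumptions made for illustration, not details from the passage.

```python
# Minimal sketch of a pairwise interaction test in the spirit of differential grouping:
# variables i and j are judged to interact if shifting x_j changes the effect that
# perturbing x_i has on f. Threshold, probe points, and the toy function are assumptions.
import numpy as np

def interacts(f, dim, i, j, lower=-5.0, upper=5.0, eps=1e-3):
    base = np.full(dim, lower)
    x_i = base.copy(); x_i[i] = upper
    delta_1 = f(x_i) - f(base)           # effect of moving x_i with x_j at its lower bound
    base_j = base.copy(); base_j[j] = (lower + upper) / 2.0
    x_ij = base_j.copy(); x_ij[i] = upper
    delta_2 = f(x_ij) - f(base_j)        # same move of x_i, but with x_j shifted
    return abs(delta_1 - delta_2) > eps  # a change in the effect signals interaction

# Toy partially additively separable function: x0 * x1 + x2^2
f = lambda x: x[0] * x[1] + x[2] ** 2
print(interacts(f, 3, 0, 1))  # True: x0 and x1 interact through the product term
print(interacts(f, 3, 0, 2))  # False: x0 and x2 are additively separable
```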
{
"docid": "d7b0711c45166395689037d21942578d",
"text": "Cipher text-Policy Attribute-Based Proxy Re-Encryption (CP-ABPRE) extends the traditional Proxy Re-Encryption (PRE) by allowing a semi-trusted proxy to transform a cipher text under an access policy to the one with the same plaintext under another access policy (i.e. attribute-based re-encryption). The proxy, however, learns nothing about the underlying plaintext. CP-ABPRE has many real world applications, such as fine-grained access control in cloud storage systems and medical records sharing among different hospitals. Previous CP-ABPRE schemes leave how to be secure against Chosen-Cipher text Attacks (CCA) as an open problem. This paper, for the first time, proposes a new CP-ABPRE to tackle the problem. The new scheme supports attribute-based re-encryption with any monotonic access structures. Despite our scheme is constructed in the random oracle model, it can be proved CCA secure under the decisional q-parallel bilinear Diffie-Hellman exponent assumption.",
"title": ""
},
{
"docid": "7084e2455ea696eec4a0f93b3140d71b",
"text": "Reinforcement learning is a simple, and yet, comprehensive theory of learning that simultaneously models the adaptive behavior of artificial agents, such as robots and autonomous software programs, as well as attempts to explain the emergent behavior of biological systems. It also gives rise to computational ideas that provide a powerful tool to solve problems involving sequential prediction and decision making. Temporal difference learning is the most widely used method to solve reinforcement learning problems, with a rich history dating back more than three decades. For these and many other reasons, devel1 This article is currently not under review for the journal Foundations and Trends in ML, but will be submitted for formal peer review at some point in the future, once the draft reaches a stable “equilibrium” state. ar X iv :1 40 5. 67 57 v1 [ cs .L G ] 2 6 M ay 2 01 4 oping a complete theory of reinforcement learning, one that is both rigorous and useful has been an ongoing research investigation for several decades. In this paper, we set forth a new vision of reinforcement learning developed by us over the past few years, one that yields mathematically rigorous solutions to longstanding important questions that have remained unresolved: (i) how to design reliable, convergent, and robust reinforcement learning algorithms (ii) how to guarantee that reinforcement learning satisfies pre-specified “safely” guarantees, and remains in a stable region of the parameter space (iii) how to design “off-policy” temporal difference learning algorithms in a reliable and stable manner, and finally (iv) how to integrate the study of reinforcement learning into the rich theory of stochastic optimization. In this paper, we provide detailed answers to all these questions using the powerful framework of proximal operators. The most important idea that emerges is the use of primal dual spaces connected through the use of a Legendre transform. This allows temporal difference updates to occur in dual spaces, allowing a variety of important technical advantages. The Legendre transform, as we show, elegantly generalizes past algorithms for solving reinforcement learning problems, such as natural gradient methods, which we show relate closely to the previously unconnected framework of mirror descent methods. Equally importantly, proximal operator theory enables the systematic development of operator splitting methods that show how to safely and reliably decompose complex products of gradients that occur in recent variants of gradient-based temporal difference learning. This key technical innovation makes it possible to finally design “true” stochastic gradient methods for reinforcement learning. Finally, Legendre transforms enable a variety of other benefits, including modeling sparsity and domain geometry. Our work builds extensively on recent work on the convergence of saddle-point algorithms, and on the theory of monotone operators in Hilbert spaces, both in optimization and for variational inequalities. The latter framework, the subject of another ongoing investigation by our group, holds the promise of an even more elegant framework for reinforcement learning. Its explication is currently the topic of a further monograph that will appear in due course. Dedicated to Andrew Barto and Richard Sutton for inspiring a generation of researchers to the study of reinforcement learning. 
Algorithm 1 TD (1984) (1) δt = rt + γφ ′ t T θt − φt θt (2) θt+1 = θt + βtδt Algorithm 2 GTD2-MP (2014) (1) wt+ 1 2 = wt + βt(δt − φt wt)φt, θt+ 1 2 = proxαth ( θt + αt(φt − γφt)(φt wt) ) (2) δt+ 1 2 = rt + γφ ′ t T θt+ 1 2 − φt θt+ 1 2 (3) wt+1 = wt + βt(δt+ 1 2 − φt wt+ 1 2 )φt , θt+1 = proxαth ( θt + αt(φt − γφt)(φt wt+ 1 2 ) )",
"title": ""
},
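For concreteness, here is a small sketch of the classical TD update quoted in the record above (Algorithm 1) with linear function approximation. The 5-state random-walk environment, one-hot features, and constant step size are assumptions for the example, not part of the passage.

```python
# Minimal sketch of TD(0) with linear function approximation, mirroring Algorithm 1:
#   delta_t     = r_t + gamma * phi'_t . theta_t - phi_t . theta_t
#   theta_{t+1} = theta_t + beta_t * delta_t * phi_t
# The 5-state random walk, one-hot features, and constant step size are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma, beta = 5, 0.9, 0.1
phi = np.eye(n_states)                 # one-hot feature vector per state
theta = np.zeros(n_states)

for _ in range(5000):
    s = rng.integers(n_states)
    s_next = min(n_states - 1, max(0, s + rng.choice([-1, 1])))
    r = 1.0 if s_next == n_states - 1 else 0.0
    delta = r + gamma * phi[s_next] @ theta - phi[s] @ theta   # TD error
    theta = theta + beta * delta * phi[s]                      # TD(0) update

print(np.round(theta, 2))  # approximate state values under the random behavior
```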
{
"docid": "e4347c1b3df0bf821f552ef86a17a8c8",
"text": "Volumetric lesion segmentation via medical imaging is a powerful means to precisely assess multiple time-point lesion/tumor changes. Because manual 3D segmentation is prohibitively time consuming and requires radiological experience, current practices rely on an imprecise surrogate called response evaluation criteria in solid tumors (RECIST). Despite their coarseness, RECIST marks are commonly found in current hospital picture and archiving systems (PACS), meaning they can provide a potentially powerful, yet extraordinarily challenging, source of weak supervision for full 3D segmentation. Toward this end, we introduce a convolutional neural network based weakly supervised self-paced segmentation (WSSS) method to 1) generate the initial lesion segmentation on the axial RECISTslice; 2) learn the data distribution on RECIST-slices; 3) adapt to segment the whole volume slice by slice to finally obtain a volumetric segmentation. In addition, we explore how super-resolution images (2 ∼ 5 times beyond the physical CT imaging), generated from a proposed stacked generative adversarial network, can aid the WSSS performance. We employ the DeepLesion dataset, a comprehensive CTimage lesion dataset of 32, 735 PACS-bookmarked findings, which include lesions, tumors, and lymph nodes of varying sizes, categories, body regions and surrounding contexts. These are drawn from 10, 594 studies of 4, 459 patients. We also validate on a lymph-node dataset, where 3D ground truth masks are available for all images. For the DeepLesion dataset, we report mean Dice coefficients of 93% on RECIST-slices and 76% in 3D lesion volumes. We further validate using a subjective user study, where an experienced ∗Indicates equal contribution. †This work is done during Jinzheng Cai’s internship at National Institutes of Health. Le Lu is now with Nvidia Corp ([email protected]). CN N Initial 2D Segmentation Self-Paced 3D Segmentation CN N CN N CN N Image Image",
"title": ""
},
{
"docid": "d34759a882df6bc482b64530999bcda3",
"text": "The Static Single Assignment (SSA) form is a program representation used in many optimizing compilers. The key step in converting a program to SSA form is called φ-placement. Many algorithms for φ-placement have been proposed in the literature, but the relationships between these algorithms are not well understood.In this article, we propose a framework within which we systematically derive (i) properties of the SSA form and (ii) φ-placement algorithms. This framework is based on a new relation called merge which captures succinctly the structure of a program's control flow graph that is relevant to its SSA form. The φ-placement algorithms we derive include most of the ones described in the literature, as well as several new ones. We also evaluate experimentally the performance of some of these algorithms on the SPEC92 benchmarks.Some of the algorithms described here are optimal for a single variable. However, their repeated application is not necessarily optimal for multiple variables. We conclude the article by describing such an optimal algorithm, based on the transitive reduction of the merge relation, for multi-variable φ-placement in structured programs. The problem for general programs remains open.",
"title": ""
},
{
"docid": "8e7cef98d1d3404dd5101ddde88489ef",
"text": "The present experiments were designed to determine the efficacy of metomidate hydrochloride as an alternative anesthetic with potential cortisol blocking properties for channel catfish Ictalurus punctatus. Channel catfish (75 g) were exposed to concentrations of metomidate ranging from 0.5 to 16 ppm for a period of 60 min. At 16-ppm metomidate, mortality occurred in 65% of the catfish. No mortalities were observed at concentrations of 8 ppm or less. The minimum concentration of metomidate producing desirable anesthetic properties was 6 ppm. At this concentration, acceptable induction and recovery times were observed in catfish ranging from 3 to 810 g average body weight. Plasma cortisol levels during metomidate anesthesia (6 ppm) were compared to fish anesthetized with tricaine methanesulfonate (100 ppm), quinaldine (30 ppm) and clove oil (100 ppm). Cortisol levels of catfish treated with metomidate and clove oil remained at baseline levels during 30 min of anesthesia (P>0.05). Plasma cortisol levels of tricaine methanesulfonate and quinaldine anesthetized catfish peaked approximately eightand fourfold higher (P< 0.05), respectively, than fish treated with metomidate. These results suggest that the physiological disturbance of channel catfish during routine-handling procedures and stress-related research could be reduced through the use of metomidate as an anesthetic. D 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "5d82469913da465c7445359dcdbbc89b",
"text": "There is increasing interest in using synthetic aperture radar (SAR) images in automated target recognition and decision-making tasks. The success of such tasks depends on how well the reconstructed SAR images exhibit certain features of the underlying scene. Based on the observation that typical underlying scenes usually exhibit sparsity in terms of such features, this paper presents an image formation method that formulates the SAR imaging problem as a sparse signal representation problem. For problems of complex-valued nature, such as SAR, a key challenge is how to choose the dictionary and the representation scheme for effective sparse representation. Since features of the SAR reflectivity magnitude are usually of interest, the approach is designed to sparsely represent the magnitude of the complex-valued scattered field. This turns the image reconstruction problem into a joint optimisation problem over the representation of magnitude and phase of the underlying field reflectivities. The authors develop the mathematical framework for this method and propose an iterative solution for the corresponding joint optimisation problem. The experimental results demonstrate the superiority of this method over previous approaches in terms of both producing high-quality SAR images and exhibiting robustness to uncertain or limited data.",
"title": ""
}
] |
scidocsrr
|